00:00:00.001 Started by upstream project "autotest-per-patch" build number 132389
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.125 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:07.071 The recommended git tool is: git
00:00:07.071 using credential 00000000-0000-0000-0000-000000000002
00:00:07.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:07.088 Fetching changes from the remote Git repository
00:00:07.091 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:07.108 Using shallow fetch with depth 1
00:00:07.108 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:07.108 > git --version # timeout=10
00:00:07.121 > git --version # 'git version 2.39.2'
00:00:07.121 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:07.134 Setting http proxy: proxy-dmz.intel.com:911
00:00:07.134 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:13.609 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:13.626 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:13.640 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:13.640 > git config core.sparsecheckout # timeout=10
00:00:13.655 > git read-tree -mu HEAD # timeout=10
00:00:13.672 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:13.695 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:13.695 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:13.790 [Pipeline] Start of Pipeline
00:00:13.803 [Pipeline] library
00:00:13.804 Loading library shm_lib@master
00:00:13.804 Library shm_lib@master is cached. Copying from home.
00:00:13.820 [Pipeline] node
00:00:13.830 Running on WFP15 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:13.832 [Pipeline] {
00:00:13.842 [Pipeline] catchError
00:00:13.844 [Pipeline] {
00:00:13.857 [Pipeline] wrap
00:00:13.865 [Pipeline] {
00:00:13.873 [Pipeline] stage
00:00:13.874 [Pipeline] { (Prologue)
00:00:14.070 [Pipeline] sh
00:00:14.350 + logger -p user.info -t JENKINS-CI
00:00:14.363 [Pipeline] echo
00:00:14.365 Node: WFP15
00:00:14.370 [Pipeline] sh
00:00:14.667 [Pipeline] setCustomBuildProperty
00:00:14.678 [Pipeline] echo
00:00:14.680 Cleanup processes
00:00:14.685 [Pipeline] sh
00:00:14.972 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:14.972 628090 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:14.986 [Pipeline] sh
00:00:15.264 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:15.264 ++ grep -v 'sudo pgrep'
00:00:15.264 ++ awk '{print $1}'
00:00:15.264 + sudo kill -9
00:00:15.264 + true
00:00:15.275 [Pipeline] cleanWs
00:00:15.283 [WS-CLEANUP] Deleting project workspace...
00:00:15.283 [WS-CLEANUP] Deferred wipeout is used...
00:00:15.288 [WS-CLEANUP] done
00:00:15.291 [Pipeline] setCustomBuildProperty
00:00:15.300 [Pipeline] sh
00:00:15.579 + sudo git config --global --replace-all safe.directory '*'
00:00:15.683 [Pipeline] httpRequest
00:00:16.058 [Pipeline] echo
00:00:16.060 Sorcerer 10.211.164.20 is alive
00:00:16.070 [Pipeline] retry
00:00:16.073 [Pipeline] {
00:00:16.089 [Pipeline] httpRequest
00:00:16.093 HttpMethod: GET
00:00:16.094 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.095 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.102 Response Code: HTTP/1.1 200 OK
00:00:16.102 Success: Status code 200 is in the accepted range: 200,404
00:00:16.103 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.557 [Pipeline] }
00:00:25.574 [Pipeline] // retry
00:00:25.582 [Pipeline] sh
00:00:25.868 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.885 [Pipeline] httpRequest
00:00:26.383 [Pipeline] echo
00:00:26.385 Sorcerer 10.211.164.20 is alive
00:00:26.394 [Pipeline] retry
00:00:26.395 [Pipeline] {
00:00:26.410 [Pipeline] httpRequest
00:00:26.414 HttpMethod: GET
00:00:26.415 URL: http://10.211.164.20/packages/spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz
00:00:26.415 Sending request to url: http://10.211.164.20/packages/spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz
00:00:26.421 Response Code: HTTP/1.1 200 OK
00:00:26.422 Success: Status code 200 is in the accepted range: 200,404
00:00:26.423 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz
00:02:48.104 [Pipeline] }
00:02:48.124 [Pipeline] // retry
00:02:48.133 [Pipeline] sh
00:02:48.422 + tar --no-same-owner -xf spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz
00:02:50.968 [Pipeline] sh
00:02:51.251 + git -C spdk log --oneline -n5
00:02:51.251 f86091626 dif: Rename internal generate/verify_copy() by insert/strip_copy()
00:02:51.251 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion
00:02:51.251 a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used
00:02:51.251 876509865 test/nvme/xnvme: Test all conserve_cpu variants
00:02:51.251 a25b16198 test/nvme/xnvme: Enable polling in nvme driver
00:02:51.262 [Pipeline] }
00:02:51.277 [Pipeline] // stage
00:02:51.286 [Pipeline] stage
00:02:51.289 [Pipeline] { (Prepare)
00:02:51.306 [Pipeline] writeFile
00:02:51.323 [Pipeline] sh
00:02:51.606 + logger -p user.info -t JENKINS-CI
00:02:51.616 [Pipeline] sh
00:02:51.895 + logger -p user.info -t JENKINS-CI
00:02:51.905 [Pipeline] sh
00:02:52.192 + cat autorun-spdk.conf
00:02:52.192 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:52.192 SPDK_TEST_NVMF=1
00:02:52.192 SPDK_TEST_NVME_CLI=1
00:02:52.192 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:52.192 SPDK_TEST_NVMF_NICS=e810
00:02:52.192 SPDK_TEST_VFIOUSER=1
00:02:52.192 SPDK_RUN_UBSAN=1
00:02:52.192 NET_TYPE=phy
00:02:52.208 RUN_NIGHTLY=0
00:02:52.245 [Pipeline] readFile
00:02:52.261 [Pipeline] withEnv
00:02:52.263 [Pipeline] {
00:02:52.270 [Pipeline] sh
00:02:52.547 + set -ex
00:02:52.547 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:52.547 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:52.547 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:52.547 ++ SPDK_TEST_NVMF=1
00:02:52.547 ++ SPDK_TEST_NVME_CLI=1
00:02:52.548 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:52.548 ++ SPDK_TEST_NVMF_NICS=e810
00:02:52.548 ++ SPDK_TEST_VFIOUSER=1
00:02:52.548 ++ SPDK_RUN_UBSAN=1
00:02:52.548 ++ NET_TYPE=phy
00:02:52.548 ++ RUN_NIGHTLY=0
00:02:52.548 + case $SPDK_TEST_NVMF_NICS in
00:02:52.548 + DRIVERS=ice
00:02:52.548 + [[ tcp == \r\d\m\a ]]
00:02:52.548 + [[ -n ice ]]
00:02:52.548 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:52.548 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:52.548 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:52.548 rmmod: ERROR: Module irdma is not currently loaded
00:02:52.548 rmmod: ERROR: Module i40iw is not currently loaded
00:02:52.548 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:52.548 + true
00:02:52.548 + for D in $DRIVERS
00:02:52.548 + sudo modprobe ice
00:02:52.548 + exit 0
00:02:52.557 [Pipeline] }
00:02:52.571 [Pipeline] // withEnv
00:02:52.576 [Pipeline] }
00:02:52.589 [Pipeline] // stage
00:02:52.598 [Pipeline] catchError
00:02:52.600 [Pipeline] {
00:02:52.613 [Pipeline] timeout
00:02:52.613 Timeout set to expire in 1 hr 0 min
00:02:52.615 [Pipeline] {
00:02:52.628 [Pipeline] stage
00:02:52.630 [Pipeline] { (Tests)
00:02:52.643 [Pipeline] sh
00:02:52.927 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:52.927 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:52.927 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:52.927 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:52.927 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:52.927 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:52.927 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:52.927 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:52.927 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:52.927 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:52.927 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:52.927 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:52.927 + source /etc/os-release
00:02:52.927 ++ NAME='Fedora Linux'
00:02:52.927 ++ VERSION='39 (Cloud Edition)'
00:02:52.927 ++ ID=fedora
00:02:52.927 ++ VERSION_ID=39
00:02:52.927 ++ VERSION_CODENAME=
00:02:52.927 ++ PLATFORM_ID=platform:f39
00:02:52.927 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:52.927 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:52.927 ++ LOGO=fedora-logo-icon
00:02:52.927 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:52.927 ++ HOME_URL=https://fedoraproject.org/
00:02:52.927 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:52.927 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:52.927 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:52.927 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:52.927 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:52.927 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:52.927 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:52.927 ++ SUPPORT_END=2024-11-12
00:02:52.927 ++ VARIANT='Cloud Edition'
00:02:52.927 ++ VARIANT_ID=cloud
00:02:52.927 + uname -a
00:02:52.927 Linux spdk-wfp-15 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:52.927 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:56.216 Hugepages
00:02:56.216 node hugesize free / total
00:02:56.216 node0 1048576kB 0 / 0
00:02:56.216 node0 2048kB 0 / 0
00:02:56.216 node1 1048576kB 0 / 0
00:02:56.216 node1 2048kB 0 / 0
00:02:56.216
00:02:56.216 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:56.216 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:56.216 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:56.216 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:56.216 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:56.216 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:56.216 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:56.217 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:56.217 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:56.217 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1
00:02:56.217 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:56.217 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:56.217 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:56.217 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:56.217 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:56.217 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:56.217 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:56.217 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:56.217 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:56.217 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme3 nvme3n1
00:02:56.217 NVMe 0000:d9:00.0 8086 0a54 1 nvme nvme2 nvme2n1
00:02:56.217 + rm -f /tmp/spdk-ld-path
00:02:56.217 + source autorun-spdk.conf
00:02:56.217 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:56.217 ++ SPDK_TEST_NVMF=1
00:02:56.217 ++ SPDK_TEST_NVME_CLI=1
00:02:56.217 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:56.217 ++ SPDK_TEST_NVMF_NICS=e810
00:02:56.217 ++ SPDK_TEST_VFIOUSER=1
00:02:56.217 ++ SPDK_RUN_UBSAN=1
00:02:56.217 ++ NET_TYPE=phy
00:02:56.217 ++ RUN_NIGHTLY=0
00:02:56.217 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:56.217 + [[ -n '' ]]
00:02:56.217 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:56.217 + for M in /var/spdk/build-*-manifest.txt
00:02:56.217 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:56.217 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:56.217 + for M in /var/spdk/build-*-manifest.txt
00:02:56.217 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:56.217 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:56.217 + for M in /var/spdk/build-*-manifest.txt
00:02:56.217 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:56.217 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:56.217 ++ uname
00:02:56.217 + [[ Linux == \L\i\n\u\x ]]
00:02:56.217 + sudo dmesg -T
00:02:56.217 + sudo dmesg --clear
00:02:56.477 + dmesg_pid=629706
00:02:56.477 + [[ Fedora Linux == FreeBSD ]]
00:02:56.477 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:56.477 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:56.477 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:56.477 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:56.477 + sudo dmesg -Tw
00:02:56.477 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:56.477 + [[ -x /usr/src/fio-static/fio ]]
00:02:56.477 + export FIO_BIN=/usr/src/fio-static/fio
00:02:56.477 + FIO_BIN=/usr/src/fio-static/fio
00:02:56.477 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:56.477 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:56.477 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:56.477 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:56.477 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:56.477 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:56.477 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:56.477 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:56.477 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:56.477 12:17:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
12:17:02 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
12:17:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
12:17:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
12:17:02 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:17:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
12:17:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
12:17:02 -- scripts/common.sh@15 -- $ shopt -s extglob
12:17:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
12:17:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:17:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:17:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:17:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:17:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:17:02 -- paths/export.sh@5 -- $ export PATH
12:17:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:17:02 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
12:17:02 -- common/autobuild_common.sh@493 -- $ date +%s
12:17:02 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101422.XXXXXX
12:17:02 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101422.gBT62F
12:17:02 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
12:17:02 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
12:17:02 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
12:17:02 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
12:17:02 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
12:17:02 -- common/autobuild_common.sh@509 -- $ get_config_params
12:17:02 -- common/autotest_common.sh@409 -- $ xtrace_disable
12:17:02 -- common/autotest_common.sh@10 -- $ set +x
00:02:56.478 12:17:02 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
12:17:02 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
12:17:02 -- pm/common@17 -- $ local monitor
12:17:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:17:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:17:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:17:02 -- pm/common@21 -- $ date +%s
12:17:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:17:02 -- pm/common@21 -- $ date +%s
12:17:02 -- pm/common@25 -- $ sleep 1
12:17:02 -- pm/common@21 -- $ date +%s
12:17:02 -- pm/common@21 -- $ date +%s
12:17:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101422
12:17:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101422
12:17:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101422
12:17:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101422
00:02:56.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101422_collect-cpu-load.pm.log
00:02:56.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101422_collect-vmstat.pm.log
00:02:56.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101422_collect-cpu-temp.pm.log
00:02:56.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101422_collect-bmc-pm.bmc.pm.log
00:02:57.416 12:17:03 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
12:17:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:17:03 -- spdk/autobuild.sh@12 -- $ umask 022
12:17:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
12:17:03 -- spdk/autobuild.sh@16 -- $ date -u
00:02:57.416 Wed Nov 20 11:17:03 AM UTC 2024
00:02:57.416 12:17:03 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:57.674 v25.01-pre-214-gf86091626
00:02:57.674 12:17:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
12:17:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
12:17:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
12:17:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:17:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:17:03 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.674 ************************************
00:02:57.674 START TEST ubsan
00:02:57.674 ************************************
12:17:03 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
using ubsan
00:02:57.674
00:02:57.674 real 0m0.000s
00:02:57.674 user 0m0.000s
00:02:57.674 sys 0m0.000s
12:17:03 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
12:17:03 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:57.674 ************************************
00:02:57.674 END TEST ubsan
************************************
00:02:57.674 12:17:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
12:17:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
12:17:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
12:17:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
12:17:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
12:17:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
12:17:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
12:17:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
12:17:03 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:57.675 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:57.675 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:58.242 Using 'verbs' RDMA provider
00:03:11.025 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:23.248 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:23.248 Creating mk/config.mk...done.
00:03:23.248 Creating mk/cc.flags.mk...done.
00:03:23.248 Type 'make' to build.
00:03:23.248 12:17:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
12:17:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:17:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:17:28 -- common/autotest_common.sh@10 -- $ set +x
00:03:23.248 ************************************
00:03:23.248 START TEST make
00:03:23.248 ************************************
12:17:28 make -- common/autotest_common.sh@1129 -- $ make -j112
00:03:23.519 make[1]: Nothing to be done for 'all'.
00:03:24.974 The Meson build system
00:03:24.974 Version: 1.5.0
00:03:24.974 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:24.974 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:24.974 Build type: native build
00:03:24.974 Project name: libvfio-user
00:03:24.974 Project version: 0.0.1
00:03:24.974 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:24.974 C linker for the host machine: cc ld.bfd 2.40-14
00:03:24.974 Host machine cpu family: x86_64
00:03:24.974 Host machine cpu: x86_64
00:03:24.974 Run-time dependency threads found: YES
00:03:24.974 Library dl found: YES
00:03:24.974 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:24.974 Run-time dependency json-c found: YES 0.17
00:03:24.974 Run-time dependency cmocka found: YES 1.1.7
00:03:24.974 Program pytest-3 found: NO
00:03:24.974 Program flake8 found: NO
00:03:24.974 Program misspell-fixer found: NO
00:03:24.974 Program restructuredtext-lint found: NO
00:03:24.974 Program valgrind found: YES (/usr/bin/valgrind)
00:03:24.974 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:24.974 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:24.974 Compiler for C supports arguments -Wwrite-strings: YES
00:03:24.974 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:24.974 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:24.974 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:24.974 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:24.974 Build targets in project: 8
00:03:24.974 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:24.974 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:24.974
00:03:24.974 libvfio-user 0.0.1
00:03:24.974
00:03:24.974 User defined options
00:03:24.974 buildtype : debug
00:03:24.974 default_library: shared
00:03:24.974 libdir : /usr/local/lib
00:03:24.974
00:03:24.974 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:25.541 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:25.541 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:25.542 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:25.542 [3/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:25.542 [4/37] Compiling C object samples/null.p/null.c.o
00:03:25.542 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:25.542 [6/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:25.542 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:25.542 [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:25.542 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:25.542 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:25.542 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:25.542 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:25.542 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:25.542 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:25.542 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:25.542 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:25.542 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:25.542 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:25.542 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:25.542 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:25.542 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:25.542 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:25.542 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:25.542 [24/37] Compiling C object samples/server.p/server.c.o
00:03:25.542 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:25.542 [26/37] Compiling C object samples/client.p/client.c.o
00:03:25.542 [27/37] Linking target samples/client
00:03:25.542 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:25.542 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:25.800 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:25.800 [31/37] Linking target test/unit_tests
00:03:25.800 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:25.800 [33/37] Linking target samples/null
00:03:25.800 [34/37] Linking target samples/server
00:03:25.800 [35/37] Linking target samples/lspci
00:03:25.800 [36/37] Linking target samples/shadow_ioeventfd_server
00:03:25.800 [37/37] Linking target samples/gpio-pci-idio-16
00:03:25.800 INFO: autodetecting backend as ninja
00:03:25.800 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:26.059 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:26.318 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:26.318 ninja: no work to do.
00:03:31.622 The Meson build system
00:03:31.622 Version: 1.5.0
00:03:31.622 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:31.622 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:31.622 Build type: native build
00:03:31.622 Program cat found: YES (/usr/bin/cat)
00:03:31.622 Project name: DPDK
00:03:31.622 Project version: 24.03.0
00:03:31.622 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:31.622 C linker for the host machine: cc ld.bfd 2.40-14
00:03:31.622 Host machine cpu family: x86_64
00:03:31.622 Host machine cpu: x86_64
00:03:31.622 Message: ## Building in Developer Mode ##
00:03:31.622 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:31.622 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:31.622 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:31.622 Program python3 found: YES (/usr/bin/python3)
00:03:31.622 Program cat found: YES (/usr/bin/cat)
00:03:31.622 Compiler for C supports arguments -march=native: YES
00:03:31.622 Checking for size of "void *" : 8
00:03:31.622 Checking for size of "void *" : 8 (cached)
00:03:31.622 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:31.622 Library m found: YES
00:03:31.622 Library numa found: YES
00:03:31.622 Has header "numaif.h" : YES
00:03:31.622 Library fdt found: NO
00:03:31.622 Library execinfo found: NO 00:03:31.622 Has header "execinfo.h" : YES 00:03:31.622 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:31.622 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:31.622 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:31.622 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:31.622 Run-time dependency openssl found: YES 3.1.1 00:03:31.622 Run-time dependency libpcap found: YES 1.10.4 00:03:31.622 Has header "pcap.h" with dependency libpcap: YES 00:03:31.622 Compiler for C supports arguments -Wcast-qual: YES 00:03:31.622 Compiler for C supports arguments -Wdeprecated: YES 00:03:31.622 Compiler for C supports arguments -Wformat: YES 00:03:31.622 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:31.622 Compiler for C supports arguments -Wformat-security: NO 00:03:31.622 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:31.622 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:31.622 Compiler for C supports arguments -Wnested-externs: YES 00:03:31.622 Compiler for C supports arguments -Wold-style-definition: YES 00:03:31.622 Compiler for C supports arguments -Wpointer-arith: YES 00:03:31.622 Compiler for C supports arguments -Wsign-compare: YES 00:03:31.622 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:31.622 Compiler for C supports arguments -Wundef: YES 00:03:31.622 Compiler for C supports arguments -Wwrite-strings: YES 00:03:31.622 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:31.622 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:31.622 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:31.622 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:31.622 Program objdump found: YES (/usr/bin/objdump) 00:03:31.622 Compiler for C supports arguments -mavx512f: YES 00:03:31.622 Checking if "AVX512 checking" compiles: YES 00:03:31.622 
Fetching value of define "__SSE4_2__" : 1 00:03:31.622 Fetching value of define "__AES__" : 1 00:03:31.622 Fetching value of define "__AVX__" : 1 00:03:31.623 Fetching value of define "__AVX2__" : 1 00:03:31.623 Fetching value of define "__AVX512BW__" : 1 00:03:31.623 Fetching value of define "__AVX512CD__" : 1 00:03:31.623 Fetching value of define "__AVX512DQ__" : 1 00:03:31.623 Fetching value of define "__AVX512F__" : 1 00:03:31.623 Fetching value of define "__AVX512VL__" : 1 00:03:31.623 Fetching value of define "__PCLMUL__" : 1 00:03:31.623 Fetching value of define "__RDRND__" : 1 00:03:31.623 Fetching value of define "__RDSEED__" : 1 00:03:31.623 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:31.623 Fetching value of define "__znver1__" : (undefined) 00:03:31.623 Fetching value of define "__znver2__" : (undefined) 00:03:31.623 Fetching value of define "__znver3__" : (undefined) 00:03:31.623 Fetching value of define "__znver4__" : (undefined) 00:03:31.623 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:31.623 Message: lib/log: Defining dependency "log" 00:03:31.623 Message: lib/kvargs: Defining dependency "kvargs" 00:03:31.623 Message: lib/telemetry: Defining dependency "telemetry" 00:03:31.623 Checking for function "getentropy" : NO 00:03:31.623 Message: lib/eal: Defining dependency "eal" 00:03:31.623 Message: lib/ring: Defining dependency "ring" 00:03:31.623 Message: lib/rcu: Defining dependency "rcu" 00:03:31.623 Message: lib/mempool: Defining dependency "mempool" 00:03:31.623 Message: lib/mbuf: Defining dependency "mbuf" 00:03:31.623 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:31.623 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:31.623 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:31.623 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:31.623 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:31.623 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 
00:03:31.623 Compiler for C supports arguments -mpclmul: YES 00:03:31.623 Compiler for C supports arguments -maes: YES 00:03:31.623 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:31.623 Compiler for C supports arguments -mavx512bw: YES 00:03:31.623 Compiler for C supports arguments -mavx512dq: YES 00:03:31.623 Compiler for C supports arguments -mavx512vl: YES 00:03:31.623 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:31.623 Compiler for C supports arguments -mavx2: YES 00:03:31.623 Compiler for C supports arguments -mavx: YES 00:03:31.623 Message: lib/net: Defining dependency "net" 00:03:31.623 Message: lib/meter: Defining dependency "meter" 00:03:31.623 Message: lib/ethdev: Defining dependency "ethdev" 00:03:31.623 Message: lib/pci: Defining dependency "pci" 00:03:31.623 Message: lib/cmdline: Defining dependency "cmdline" 00:03:31.623 Message: lib/hash: Defining dependency "hash" 00:03:31.623 Message: lib/timer: Defining dependency "timer" 00:03:31.623 Message: lib/compressdev: Defining dependency "compressdev" 00:03:31.623 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:31.623 Message: lib/dmadev: Defining dependency "dmadev" 00:03:31.623 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:31.623 Message: lib/power: Defining dependency "power" 00:03:31.623 Message: lib/reorder: Defining dependency "reorder" 00:03:31.623 Message: lib/security: Defining dependency "security" 00:03:31.623 Has header "linux/userfaultfd.h" : YES 00:03:31.623 Has header "linux/vduse.h" : YES 00:03:31.623 Message: lib/vhost: Defining dependency "vhost" 00:03:31.623 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:31.623 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:31.623 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:31.623 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:31.623 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:03:31.623 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:31.623 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:31.623 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:31.623 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:31.623 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:31.623 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:31.623 Configuring doxy-api-html.conf using configuration 00:03:31.623 Configuring doxy-api-man.conf using configuration 00:03:31.623 Program mandb found: YES (/usr/bin/mandb) 00:03:31.623 Program sphinx-build found: NO 00:03:31.623 Configuring rte_build_config.h using configuration 00:03:31.623 Message: 00:03:31.623 ================= 00:03:31.623 Applications Enabled 00:03:31.623 ================= 00:03:31.623 00:03:31.623 apps: 00:03:31.623 00:03:31.623 00:03:31.623 Message: 00:03:31.623 ================= 00:03:31.623 Libraries Enabled 00:03:31.623 ================= 00:03:31.623 00:03:31.623 libs: 00:03:31.623 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:31.623 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:31.623 cryptodev, dmadev, power, reorder, security, vhost, 00:03:31.623 00:03:31.623 Message: 00:03:31.623 =============== 00:03:31.623 Drivers Enabled 00:03:31.623 =============== 00:03:31.623 00:03:31.623 common: 00:03:31.623 00:03:31.623 bus: 00:03:31.623 pci, vdev, 00:03:31.623 mempool: 00:03:31.623 ring, 00:03:31.623 dma: 00:03:31.623 00:03:31.623 net: 00:03:31.623 00:03:31.623 crypto: 00:03:31.623 00:03:31.623 compress: 00:03:31.623 00:03:31.623 vdpa: 00:03:31.623 00:03:31.623 00:03:31.623 Message: 00:03:31.623 ================= 00:03:31.623 Content Skipped 00:03:31.623 ================= 00:03:31.623 00:03:31.623 apps: 00:03:31.623 dumpcap: explicitly disabled via build config 00:03:31.623 graph: explicitly disabled via build 
config 00:03:31.623 pdump: explicitly disabled via build config 00:03:31.623 proc-info: explicitly disabled via build config 00:03:31.623 test-acl: explicitly disabled via build config 00:03:31.623 test-bbdev: explicitly disabled via build config 00:03:31.623 test-cmdline: explicitly disabled via build config 00:03:31.623 test-compress-perf: explicitly disabled via build config 00:03:31.623 test-crypto-perf: explicitly disabled via build config 00:03:31.623 test-dma-perf: explicitly disabled via build config 00:03:31.623 test-eventdev: explicitly disabled via build config 00:03:31.623 test-fib: explicitly disabled via build config 00:03:31.623 test-flow-perf: explicitly disabled via build config 00:03:31.623 test-gpudev: explicitly disabled via build config 00:03:31.623 test-mldev: explicitly disabled via build config 00:03:31.623 test-pipeline: explicitly disabled via build config 00:03:31.623 test-pmd: explicitly disabled via build config 00:03:31.623 test-regex: explicitly disabled via build config 00:03:31.623 test-sad: explicitly disabled via build config 00:03:31.623 test-security-perf: explicitly disabled via build config 00:03:31.623 00:03:31.623 libs: 00:03:31.623 argparse: explicitly disabled via build config 00:03:31.623 metrics: explicitly disabled via build config 00:03:31.623 acl: explicitly disabled via build config 00:03:31.623 bbdev: explicitly disabled via build config 00:03:31.623 bitratestats: explicitly disabled via build config 00:03:31.623 bpf: explicitly disabled via build config 00:03:31.623 cfgfile: explicitly disabled via build config 00:03:31.623 distributor: explicitly disabled via build config 00:03:31.623 efd: explicitly disabled via build config 00:03:31.623 eventdev: explicitly disabled via build config 00:03:31.623 dispatcher: explicitly disabled via build config 00:03:31.623 gpudev: explicitly disabled via build config 00:03:31.623 gro: explicitly disabled via build config 00:03:31.623 gso: explicitly disabled via build config 
00:03:31.623 ip_frag: explicitly disabled via build config 00:03:31.623 jobstats: explicitly disabled via build config 00:03:31.623 latencystats: explicitly disabled via build config 00:03:31.623 lpm: explicitly disabled via build config 00:03:31.623 member: explicitly disabled via build config 00:03:31.623 pcapng: explicitly disabled via build config 00:03:31.623 rawdev: explicitly disabled via build config 00:03:31.623 regexdev: explicitly disabled via build config 00:03:31.623 mldev: explicitly disabled via build config 00:03:31.623 rib: explicitly disabled via build config 00:03:31.623 sched: explicitly disabled via build config 00:03:31.623 stack: explicitly disabled via build config 00:03:31.623 ipsec: explicitly disabled via build config 00:03:31.623 pdcp: explicitly disabled via build config 00:03:31.623 fib: explicitly disabled via build config 00:03:31.623 port: explicitly disabled via build config 00:03:31.623 pdump: explicitly disabled via build config 00:03:31.623 table: explicitly disabled via build config 00:03:31.623 pipeline: explicitly disabled via build config 00:03:31.623 graph: explicitly disabled via build config 00:03:31.623 node: explicitly disabled via build config 00:03:31.623 00:03:31.623 drivers: 00:03:31.623 common/cpt: not in enabled drivers build config 00:03:31.624 common/dpaax: not in enabled drivers build config 00:03:31.624 common/iavf: not in enabled drivers build config 00:03:31.624 common/idpf: not in enabled drivers build config 00:03:31.624 common/ionic: not in enabled drivers build config 00:03:31.624 common/mvep: not in enabled drivers build config 00:03:31.624 common/octeontx: not in enabled drivers build config 00:03:31.624 bus/auxiliary: not in enabled drivers build config 00:03:31.624 bus/cdx: not in enabled drivers build config 00:03:31.624 bus/dpaa: not in enabled drivers build config 00:03:31.624 bus/fslmc: not in enabled drivers build config 00:03:31.624 bus/ifpga: not in enabled drivers build config 00:03:31.624 
bus/platform: not in enabled drivers build config 00:03:31.624 bus/uacce: not in enabled drivers build config 00:03:31.624 bus/vmbus: not in enabled drivers build config 00:03:31.624 common/cnxk: not in enabled drivers build config 00:03:31.624 common/mlx5: not in enabled drivers build config 00:03:31.624 common/nfp: not in enabled drivers build config 00:03:31.624 common/nitrox: not in enabled drivers build config 00:03:31.624 common/qat: not in enabled drivers build config 00:03:31.624 common/sfc_efx: not in enabled drivers build config 00:03:31.624 mempool/bucket: not in enabled drivers build config 00:03:31.624 mempool/cnxk: not in enabled drivers build config 00:03:31.624 mempool/dpaa: not in enabled drivers build config 00:03:31.624 mempool/dpaa2: not in enabled drivers build config 00:03:31.624 mempool/octeontx: not in enabled drivers build config 00:03:31.624 mempool/stack: not in enabled drivers build config 00:03:31.624 dma/cnxk: not in enabled drivers build config 00:03:31.624 dma/dpaa: not in enabled drivers build config 00:03:31.624 dma/dpaa2: not in enabled drivers build config 00:03:31.624 dma/hisilicon: not in enabled drivers build config 00:03:31.624 dma/idxd: not in enabled drivers build config 00:03:31.624 dma/ioat: not in enabled drivers build config 00:03:31.624 dma/skeleton: not in enabled drivers build config 00:03:31.624 net/af_packet: not in enabled drivers build config 00:03:31.624 net/af_xdp: not in enabled drivers build config 00:03:31.624 net/ark: not in enabled drivers build config 00:03:31.624 net/atlantic: not in enabled drivers build config 00:03:31.624 net/avp: not in enabled drivers build config 00:03:31.624 net/axgbe: not in enabled drivers build config 00:03:31.624 net/bnx2x: not in enabled drivers build config 00:03:31.624 net/bnxt: not in enabled drivers build config 00:03:31.624 net/bonding: not in enabled drivers build config 00:03:31.624 net/cnxk: not in enabled drivers build config 00:03:31.624 net/cpfl: not in enabled 
drivers build config 00:03:31.624 net/cxgbe: not in enabled drivers build config 00:03:31.624 net/dpaa: not in enabled drivers build config 00:03:31.624 net/dpaa2: not in enabled drivers build config 00:03:31.624 net/e1000: not in enabled drivers build config 00:03:31.624 net/ena: not in enabled drivers build config 00:03:31.624 net/enetc: not in enabled drivers build config 00:03:31.624 net/enetfec: not in enabled drivers build config 00:03:31.624 net/enic: not in enabled drivers build config 00:03:31.624 net/failsafe: not in enabled drivers build config 00:03:31.624 net/fm10k: not in enabled drivers build config 00:03:31.624 net/gve: not in enabled drivers build config 00:03:31.624 net/hinic: not in enabled drivers build config 00:03:31.624 net/hns3: not in enabled drivers build config 00:03:31.624 net/i40e: not in enabled drivers build config 00:03:31.624 net/iavf: not in enabled drivers build config 00:03:31.624 net/ice: not in enabled drivers build config 00:03:31.624 net/idpf: not in enabled drivers build config 00:03:31.624 net/igc: not in enabled drivers build config 00:03:31.624 net/ionic: not in enabled drivers build config 00:03:31.624 net/ipn3ke: not in enabled drivers build config 00:03:31.624 net/ixgbe: not in enabled drivers build config 00:03:31.624 net/mana: not in enabled drivers build config 00:03:31.624 net/memif: not in enabled drivers build config 00:03:31.624 net/mlx4: not in enabled drivers build config 00:03:31.624 net/mlx5: not in enabled drivers build config 00:03:31.624 net/mvneta: not in enabled drivers build config 00:03:31.624 net/mvpp2: not in enabled drivers build config 00:03:31.624 net/netvsc: not in enabled drivers build config 00:03:31.624 net/nfb: not in enabled drivers build config 00:03:31.624 net/nfp: not in enabled drivers build config 00:03:31.624 net/ngbe: not in enabled drivers build config 00:03:31.624 net/null: not in enabled drivers build config 00:03:31.624 net/octeontx: not in enabled drivers build config 
00:03:31.624 net/octeon_ep: not in enabled drivers build config 00:03:31.624 net/pcap: not in enabled drivers build config 00:03:31.624 net/pfe: not in enabled drivers build config 00:03:31.624 net/qede: not in enabled drivers build config 00:03:31.624 net/ring: not in enabled drivers build config 00:03:31.624 net/sfc: not in enabled drivers build config 00:03:31.624 net/softnic: not in enabled drivers build config 00:03:31.624 net/tap: not in enabled drivers build config 00:03:31.624 net/thunderx: not in enabled drivers build config 00:03:31.624 net/txgbe: not in enabled drivers build config 00:03:31.624 net/vdev_netvsc: not in enabled drivers build config 00:03:31.624 net/vhost: not in enabled drivers build config 00:03:31.624 net/virtio: not in enabled drivers build config 00:03:31.624 net/vmxnet3: not in enabled drivers build config 00:03:31.624 raw/*: missing internal dependency, "rawdev" 00:03:31.624 crypto/armv8: not in enabled drivers build config 00:03:31.624 crypto/bcmfs: not in enabled drivers build config 00:03:31.624 crypto/caam_jr: not in enabled drivers build config 00:03:31.624 crypto/ccp: not in enabled drivers build config 00:03:31.624 crypto/cnxk: not in enabled drivers build config 00:03:31.624 crypto/dpaa_sec: not in enabled drivers build config 00:03:31.624 crypto/dpaa2_sec: not in enabled drivers build config 00:03:31.624 crypto/ipsec_mb: not in enabled drivers build config 00:03:31.624 crypto/mlx5: not in enabled drivers build config 00:03:31.624 crypto/mvsam: not in enabled drivers build config 00:03:31.624 crypto/nitrox: not in enabled drivers build config 00:03:31.624 crypto/null: not in enabled drivers build config 00:03:31.624 crypto/octeontx: not in enabled drivers build config 00:03:31.624 crypto/openssl: not in enabled drivers build config 00:03:31.624 crypto/scheduler: not in enabled drivers build config 00:03:31.624 crypto/uadk: not in enabled drivers build config 00:03:31.624 crypto/virtio: not in enabled drivers build config 
00:03:31.624 compress/isal: not in enabled drivers build config 00:03:31.624 compress/mlx5: not in enabled drivers build config 00:03:31.624 compress/nitrox: not in enabled drivers build config 00:03:31.624 compress/octeontx: not in enabled drivers build config 00:03:31.624 compress/zlib: not in enabled drivers build config 00:03:31.624 regex/*: missing internal dependency, "regexdev" 00:03:31.624 ml/*: missing internal dependency, "mldev" 00:03:31.624 vdpa/ifc: not in enabled drivers build config 00:03:31.624 vdpa/mlx5: not in enabled drivers build config 00:03:31.624 vdpa/nfp: not in enabled drivers build config 00:03:31.624 vdpa/sfc: not in enabled drivers build config 00:03:31.624 event/*: missing internal dependency, "eventdev" 00:03:31.624 baseband/*: missing internal dependency, "bbdev" 00:03:31.624 gpu/*: missing internal dependency, "gpudev" 00:03:31.624 00:03:31.624 00:03:31.624 Build targets in project: 85 00:03:31.624 00:03:31.624 DPDK 24.03.0 00:03:31.624 00:03:31.624 User defined options 00:03:31.624 buildtype : debug 00:03:31.624 default_library : shared 00:03:31.624 libdir : lib 00:03:31.624 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:31.624 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:31.624 c_link_args : 00:03:31.624 cpu_instruction_set: native 00:03:31.624 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:31.625 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:31.625 enable_docs : false 00:03:31.625 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:31.625 enable_kmods : false 00:03:31.625 max_lcores : 128 00:03:31.625 tests : false 00:03:31.625 00:03:31.625 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:31.897 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:31.897 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:32.162 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:32.162 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:32.162 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:32.162 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:32.162 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:32.162 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:32.162 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:32.162 [9/268] Linking static target lib/librte_kvargs.a 00:03:32.162 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:32.162 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:32.162 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:32.162 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:32.162 [14/268] Linking static target lib/librte_log.a 00:03:32.162 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:32.162 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:32.162 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:32.162 [18/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:32.162 [19/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:32.162 [20/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:32.162 [21/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:32.162 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:32.162 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:32.162 [24/268] Linking static target lib/librte_pci.a 00:03:32.162 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:32.162 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:32.162 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:32.162 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:32.424 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:32.424 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:32.424 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:32.424 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:32.424 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:32.424 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:32.424 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:32.682 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:32.682 [37/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:32.682 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:32.682 [39/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:32.682 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:32.682 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:32.682 [42/268] Linking 
static target lib/librte_meter.a 00:03:32.682 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:32.682 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:32.682 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:32.682 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:32.682 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:32.682 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:32.682 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:32.682 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:32.682 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:32.682 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:32.682 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:32.682 [54/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:32.682 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:32.682 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:32.682 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:32.682 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:32.682 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:32.682 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:32.682 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:32.682 [62/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:32.682 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:32.682 [64/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:32.682 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:32.682 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:32.682 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:32.682 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:32.682 [69/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:32.682 [70/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:32.682 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:32.682 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:32.682 [73/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.682 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:32.682 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:32.682 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:32.683 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:32.683 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:32.683 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:32.683 [80/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:32.683 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:32.683 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:32.683 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:32.683 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:32.683 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:32.683 [86/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:32.683 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:32.683 [88/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:32.683 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:32.683 [90/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:32.683 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:32.683 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:32.683 [93/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.683 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:32.683 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:32.683 [96/268] Linking static target lib/librte_cmdline.a 00:03:32.683 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:32.683 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:32.683 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:32.683 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:32.683 [101/268] Linking static target lib/librte_ring.a 00:03:32.683 [102/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:32.683 [103/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:32.683 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:32.683 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:32.683 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:32.683 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:32.683 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:32.683 [109/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:32.683 [110/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:32.683 [111/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:32.683 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:32.683 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:32.683 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:32.683 [115/268] Linking static target lib/librte_telemetry.a 00:03:32.683 [116/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:32.683 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:32.683 [118/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:32.683 [119/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:32.683 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:32.683 [121/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:32.683 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:32.683 [123/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:32.683 [124/268] Linking static target lib/librte_net.a 00:03:32.683 [125/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:32.941 [126/268] Linking static target lib/librte_timer.a 00:03:32.941 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:32.941 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:32.941 [129/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:32.941 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:32.941 [131/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:32.941 [132/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:32.941 [133/268] Linking static target lib/librte_rcu.a 00:03:32.941 [134/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:32.941 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:32.941 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:32.941 [137/268] Linking static target lib/librte_eal.a 00:03:32.941 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:32.941 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:32.941 [140/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:32.941 [141/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:32.941 [142/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.941 [143/268] Linking static target lib/librte_mempool.a 00:03:32.941 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:32.941 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:32.941 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:32.941 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:32.942 [148/268] Linking static target lib/librte_compressdev.a 00:03:32.942 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:32.942 [150/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:32.942 [151/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.942 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:32.942 [153/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:32.942 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:32.942 [155/268] 
Linking static target lib/librte_mbuf.a 00:03:32.942 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:32.942 [157/268] Linking static target lib/librte_dmadev.a 00:03:32.942 [158/268] Linking target lib/librte_log.so.24.1 00:03:32.942 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:32.942 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:32.942 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:32.942 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:32.942 [163/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.942 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:32.942 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:33.201 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:33.201 [167/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:33.201 [168/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.201 [169/268] Linking static target lib/librte_reorder.a 00:03:33.201 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:33.201 [171/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:33.201 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:33.201 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:33.201 [174/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.201 [175/268] Linking target lib/librte_kvargs.so.24.1 00:03:33.201 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:33.201 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
00:03:33.201 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:33.201 [179/268] Linking static target lib/librte_power.a 00:03:33.201 [180/268] Linking static target lib/librte_security.a 00:03:33.201 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:33.201 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:33.201 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:33.201 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:33.201 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:33.201 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:33.201 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:33.201 [188/268] Linking static target lib/librte_hash.a 00:03:33.201 [189/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.201 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:33.201 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:33.201 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:33.201 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:33.201 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:33.201 [195/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.201 [196/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:33.201 [197/268] Linking target lib/librte_telemetry.so.24.1 00:03:33.461 [198/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:33.461 [199/268] Linking static target lib/librte_cryptodev.a 00:03:33.461 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:33.461 [201/268] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:33.461 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:33.461 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:33.461 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.461 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.461 [206/268] Linking static target drivers/librte_bus_vdev.a 00:03:33.461 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.461 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.461 [209/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:33.461 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.461 [211/268] Linking static target drivers/librte_mempool_ring.a 00:03:33.461 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.461 [213/268] Linking static target drivers/librte_bus_pci.a 00:03:33.461 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.721 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.721 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.721 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.721 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.721 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:33.721 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:33.721 [221/268] Linking static target lib/librte_ethdev.a 00:03:33.721 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.979 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.979 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:33.979 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.238 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.238 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.174 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:35.174 [229/268] Linking static target lib/librte_vhost.a 00:03:35.174 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.077 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.348 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.916 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.174 [234/268] Linking target lib/librte_eal.so.24.1 00:03:43.174 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:43.174 [236/268] Linking target lib/librte_ring.so.24.1 00:03:43.174 [237/268] Linking target lib/librte_timer.so.24.1 00:03:43.174 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:43.174 [239/268] Linking target lib/librte_meter.so.24.1 00:03:43.174 [240/268] Linking target lib/librte_pci.so.24.1 00:03:43.174 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:43.433 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:43.433 [243/268] Generating 
symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:43.433 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:43.433 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:43.433 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:43.433 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:43.433 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:43.433 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:43.433 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:43.691 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:43.691 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:43.691 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:43.691 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:43.691 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:43.691 [256/268] Linking target lib/librte_net.so.24.1 00:03:43.691 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:43.691 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:43.949 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:43.949 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:43.949 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:43.949 [262/268] Linking target lib/librte_security.so.24.1 00:03:43.949 [263/268] Linking target lib/librte_hash.so.24.1 00:03:43.949 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:44.207 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:44.207 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:44.207 [267/268] Linking target 
lib/librte_power.so.24.1 00:03:44.207 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:44.207 INFO: autodetecting backend as ninja 00:03:44.207 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:03:52.321 CC lib/ut/ut.o 00:03:52.321 CC lib/log/log.o 00:03:52.321 CC lib/log/log_flags.o 00:03:52.321 CC lib/log/log_deprecated.o 00:03:52.322 CC lib/ut_mock/mock.o 00:03:52.582 LIB libspdk_ut.a 00:03:52.582 LIB libspdk_log.a 00:03:52.582 LIB libspdk_ut_mock.a 00:03:52.582 SO libspdk_ut.so.2.0 00:03:52.582 SO libspdk_log.so.7.1 00:03:52.582 SO libspdk_ut_mock.so.6.0 00:03:52.582 SYMLINK libspdk_log.so 00:03:52.582 SYMLINK libspdk_ut.so 00:03:52.582 SYMLINK libspdk_ut_mock.so 00:03:53.149 CXX lib/trace_parser/trace.o 00:03:53.149 CC lib/dma/dma.o 00:03:53.149 CC lib/ioat/ioat.o 00:03:53.149 CC lib/util/base64.o 00:03:53.149 CC lib/util/bit_array.o 00:03:53.149 CC lib/util/cpuset.o 00:03:53.149 CC lib/util/crc16.o 00:03:53.149 CC lib/util/crc32.o 00:03:53.149 CC lib/util/crc32c.o 00:03:53.149 CC lib/util/crc32_ieee.o 00:03:53.149 CC lib/util/crc64.o 00:03:53.149 CC lib/util/dif.o 00:03:53.149 CC lib/util/fd.o 00:03:53.149 CC lib/util/fd_group.o 00:03:53.149 CC lib/util/file.o 00:03:53.149 CC lib/util/hexlify.o 00:03:53.149 CC lib/util/iov.o 00:03:53.149 CC lib/util/math.o 00:03:53.149 CC lib/util/net.o 00:03:53.149 CC lib/util/pipe.o 00:03:53.149 CC lib/util/strerror_tls.o 00:03:53.149 CC lib/util/string.o 00:03:53.149 CC lib/util/uuid.o 00:03:53.149 CC lib/util/xor.o 00:03:53.149 CC lib/util/zipf.o 00:03:53.149 CC lib/util/md5.o 00:03:53.149 CC lib/vfio_user/host/vfio_user_pci.o 00:03:53.149 CC lib/vfio_user/host/vfio_user.o 00:03:53.149 LIB libspdk_dma.a 00:03:53.149 SO libspdk_dma.so.5.0 00:03:53.149 SYMLINK libspdk_dma.so 00:03:53.149 LIB libspdk_ioat.a 00:03:53.408 SO libspdk_ioat.so.7.0 00:03:53.408 SYMLINK libspdk_ioat.so 00:03:53.408 LIB libspdk_vfio_user.a 00:03:53.408 
SO libspdk_vfio_user.so.5.0 00:03:53.408 LIB libspdk_util.a 00:03:53.408 SYMLINK libspdk_vfio_user.so 00:03:53.408 SO libspdk_util.so.10.1 00:03:53.666 SYMLINK libspdk_util.so 00:03:53.666 LIB libspdk_trace_parser.a 00:03:53.666 SO libspdk_trace_parser.so.6.0 00:03:53.666 SYMLINK libspdk_trace_parser.so 00:03:53.925 CC lib/env_dpdk/env.o 00:03:53.925 CC lib/conf/conf.o 00:03:53.925 CC lib/idxd/idxd.o 00:03:53.925 CC lib/env_dpdk/memory.o 00:03:53.925 CC lib/env_dpdk/pci.o 00:03:53.925 CC lib/idxd/idxd_user.o 00:03:53.925 CC lib/env_dpdk/init.o 00:03:53.925 CC lib/idxd/idxd_kernel.o 00:03:53.925 CC lib/env_dpdk/threads.o 00:03:53.925 CC lib/rdma_utils/rdma_utils.o 00:03:53.925 CC lib/env_dpdk/pci_ioat.o 00:03:53.925 CC lib/json/json_parse.o 00:03:53.925 CC lib/vmd/vmd.o 00:03:53.925 CC lib/env_dpdk/pci_virtio.o 00:03:53.925 CC lib/env_dpdk/pci_vmd.o 00:03:53.925 CC lib/json/json_util.o 00:03:53.925 CC lib/vmd/led.o 00:03:53.925 CC lib/env_dpdk/pci_idxd.o 00:03:53.925 CC lib/json/json_write.o 00:03:53.925 CC lib/env_dpdk/pci_event.o 00:03:53.925 CC lib/env_dpdk/sigbus_handler.o 00:03:53.925 CC lib/env_dpdk/pci_dpdk.o 00:03:53.925 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:53.925 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.184 LIB libspdk_conf.a 00:03:54.184 SO libspdk_conf.so.6.0 00:03:54.184 LIB libspdk_rdma_utils.a 00:03:54.184 LIB libspdk_json.a 00:03:54.184 SO libspdk_rdma_utils.so.1.0 00:03:54.184 SYMLINK libspdk_conf.so 00:03:54.184 SO libspdk_json.so.6.0 00:03:54.184 SYMLINK libspdk_rdma_utils.so 00:03:54.444 SYMLINK libspdk_json.so 00:03:54.444 LIB libspdk_idxd.a 00:03:54.444 SO libspdk_idxd.so.12.1 00:03:54.444 LIB libspdk_vmd.a 00:03:54.444 SO libspdk_vmd.so.6.0 00:03:54.444 SYMLINK libspdk_idxd.so 00:03:54.444 SYMLINK libspdk_vmd.so 00:03:54.703 CC lib/rdma_provider/common.o 00:03:54.703 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:54.703 CC lib/jsonrpc/jsonrpc_server.o 00:03:54.703 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:54.703 CC 
lib/jsonrpc/jsonrpc_client.o 00:03:54.703 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.703 LIB libspdk_rdma_provider.a 00:03:54.703 SO libspdk_rdma_provider.so.7.0 00:03:54.963 LIB libspdk_jsonrpc.a 00:03:54.963 SO libspdk_jsonrpc.so.6.0 00:03:54.963 SYMLINK libspdk_rdma_provider.so 00:03:54.963 LIB libspdk_env_dpdk.a 00:03:54.963 SYMLINK libspdk_jsonrpc.so 00:03:54.963 SO libspdk_env_dpdk.so.15.1 00:03:55.221 SYMLINK libspdk_env_dpdk.so 00:03:55.221 CC lib/rpc/rpc.o 00:03:55.480 LIB libspdk_rpc.a 00:03:55.480 SO libspdk_rpc.so.6.0 00:03:55.480 SYMLINK libspdk_rpc.so 00:03:55.739 CC lib/trace/trace.o 00:03:55.739 CC lib/keyring/keyring.o 00:03:55.739 CC lib/trace/trace_flags.o 00:03:55.739 CC lib/keyring/keyring_rpc.o 00:03:55.739 CC lib/trace/trace_rpc.o 00:03:55.739 CC lib/notify/notify.o 00:03:55.739 CC lib/notify/notify_rpc.o 00:03:55.997 LIB libspdk_notify.a 00:03:55.997 SO libspdk_notify.so.6.0 00:03:55.997 LIB libspdk_keyring.a 00:03:55.997 LIB libspdk_trace.a 00:03:55.997 SO libspdk_keyring.so.2.0 00:03:55.997 SYMLINK libspdk_notify.so 00:03:55.997 SO libspdk_trace.so.11.0 00:03:56.256 SYMLINK libspdk_keyring.so 00:03:56.256 SYMLINK libspdk_trace.so 00:03:56.514 CC lib/thread/thread.o 00:03:56.514 CC lib/thread/iobuf.o 00:03:56.514 CC lib/sock/sock.o 00:03:56.514 CC lib/sock/sock_rpc.o 00:03:56.772 LIB libspdk_sock.a 00:03:56.772 SO libspdk_sock.so.10.0 00:03:56.772 SYMLINK libspdk_sock.so 00:03:57.339 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:57.339 CC lib/nvme/nvme_ctrlr.o 00:03:57.339 CC lib/nvme/nvme_fabric.o 00:03:57.339 CC lib/nvme/nvme_ns_cmd.o 00:03:57.339 CC lib/nvme/nvme_ns.o 00:03:57.339 CC lib/nvme/nvme_pcie_common.o 00:03:57.339 CC lib/nvme/nvme_pcie.o 00:03:57.339 CC lib/nvme/nvme_qpair.o 00:03:57.339 CC lib/nvme/nvme.o 00:03:57.339 CC lib/nvme/nvme_quirks.o 00:03:57.339 CC lib/nvme/nvme_transport.o 00:03:57.339 CC lib/nvme/nvme_discovery.o 00:03:57.339 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:57.339 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:57.339 CC 
lib/nvme/nvme_tcp.o 00:03:57.339 CC lib/nvme/nvme_opal.o 00:03:57.339 CC lib/nvme/nvme_io_msg.o 00:03:57.339 CC lib/nvme/nvme_poll_group.o 00:03:57.339 CC lib/nvme/nvme_zns.o 00:03:57.339 CC lib/nvme/nvme_stubs.o 00:03:57.339 CC lib/nvme/nvme_auth.o 00:03:57.339 CC lib/nvme/nvme_cuse.o 00:03:57.339 CC lib/nvme/nvme_rdma.o 00:03:57.339 CC lib/nvme/nvme_vfio_user.o 00:03:57.339 LIB libspdk_thread.a 00:03:57.596 SO libspdk_thread.so.11.0 00:03:57.596 SYMLINK libspdk_thread.so 00:03:57.855 CC lib/vfu_tgt/tgt_rpc.o 00:03:57.855 CC lib/vfu_tgt/tgt_endpoint.o 00:03:57.855 CC lib/fsdev/fsdev.o 00:03:57.855 CC lib/fsdev/fsdev_io.o 00:03:57.855 CC lib/fsdev/fsdev_rpc.o 00:03:57.855 CC lib/init/json_config.o 00:03:57.855 CC lib/init/subsystem.o 00:03:57.855 CC lib/init/subsystem_rpc.o 00:03:57.855 CC lib/init/rpc.o 00:03:57.855 CC lib/accel/accel.o 00:03:57.855 CC lib/accel/accel_rpc.o 00:03:57.855 CC lib/accel/accel_sw.o 00:03:57.855 CC lib/virtio/virtio.o 00:03:57.855 CC lib/virtio/virtio_vhost_user.o 00:03:57.855 CC lib/blob/blobstore.o 00:03:57.855 CC lib/virtio/virtio_vfio_user.o 00:03:57.855 CC lib/blob/request.o 00:03:57.855 CC lib/virtio/virtio_pci.o 00:03:57.855 CC lib/blob/zeroes.o 00:03:57.855 CC lib/blob/blob_bs_dev.o 00:03:58.114 LIB libspdk_init.a 00:03:58.114 SO libspdk_init.so.6.0 00:03:58.114 LIB libspdk_vfu_tgt.a 00:03:58.114 LIB libspdk_virtio.a 00:03:58.114 SO libspdk_vfu_tgt.so.3.0 00:03:58.114 SO libspdk_virtio.so.7.0 00:03:58.114 SYMLINK libspdk_init.so 00:03:58.114 SYMLINK libspdk_vfu_tgt.so 00:03:58.374 SYMLINK libspdk_virtio.so 00:03:58.374 LIB libspdk_fsdev.a 00:03:58.374 SO libspdk_fsdev.so.2.0 00:03:58.374 SYMLINK libspdk_fsdev.so 00:03:58.632 CC lib/event/app.o 00:03:58.632 CC lib/event/reactor.o 00:03:58.632 CC lib/event/log_rpc.o 00:03:58.632 CC lib/event/app_rpc.o 00:03:58.632 CC lib/event/scheduler_static.o 00:03:58.632 LIB libspdk_accel.a 00:03:58.632 SO libspdk_accel.so.16.0 00:03:58.632 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:58.632 
SYMLINK libspdk_accel.so 00:03:58.891 LIB libspdk_nvme.a 00:03:58.891 LIB libspdk_event.a 00:03:58.891 SO libspdk_nvme.so.15.0 00:03:58.891 SO libspdk_event.so.14.0 00:03:58.891 SYMLINK libspdk_event.so 00:03:59.150 SYMLINK libspdk_nvme.so 00:03:59.150 CC lib/bdev/bdev.o 00:03:59.150 CC lib/bdev/bdev_rpc.o 00:03:59.150 CC lib/bdev/bdev_zone.o 00:03:59.150 CC lib/bdev/part.o 00:03:59.150 CC lib/bdev/scsi_nvme.o 00:03:59.150 LIB libspdk_fuse_dispatcher.a 00:03:59.150 SO libspdk_fuse_dispatcher.so.1.0 00:03:59.409 SYMLINK libspdk_fuse_dispatcher.so 00:03:59.977 LIB libspdk_blob.a 00:03:59.977 SO libspdk_blob.so.11.0 00:03:59.977 SYMLINK libspdk_blob.so 00:04:00.236 CC lib/blobfs/blobfs.o 00:04:00.236 CC lib/blobfs/tree.o 00:04:00.236 CC lib/lvol/lvol.o 00:04:00.803 LIB libspdk_bdev.a 00:04:00.803 SO libspdk_bdev.so.17.0 00:04:00.803 SYMLINK libspdk_bdev.so 00:04:00.803 LIB libspdk_blobfs.a 00:04:00.803 SO libspdk_blobfs.so.10.0 00:04:01.062 LIB libspdk_lvol.a 00:04:01.062 SYMLINK libspdk_blobfs.so 00:04:01.062 SO libspdk_lvol.so.10.0 00:04:01.062 SYMLINK libspdk_lvol.so 00:04:01.062 CC lib/nbd/nbd.o 00:04:01.062 CC lib/nbd/nbd_rpc.o 00:04:01.062 CC lib/ublk/ublk.o 00:04:01.062 CC lib/ublk/ublk_rpc.o 00:04:01.062 CC lib/ftl/ftl_core.o 00:04:01.062 CC lib/nvmf/ctrlr.o 00:04:01.062 CC lib/ftl/ftl_init.o 00:04:01.062 CC lib/nvmf/ctrlr_discovery.o 00:04:01.062 CC lib/nvmf/ctrlr_bdev.o 00:04:01.062 CC lib/ftl/ftl_layout.o 00:04:01.062 CC lib/ftl/ftl_debug.o 00:04:01.062 CC lib/nvmf/subsystem.o 00:04:01.062 CC lib/ftl/ftl_io.o 00:04:01.062 CC lib/nvmf/nvmf.o 00:04:01.062 CC lib/ftl/ftl_sb.o 00:04:01.062 CC lib/nvmf/nvmf_rpc.o 00:04:01.062 CC lib/ftl/ftl_l2p.o 00:04:01.062 CC lib/nvmf/transport.o 00:04:01.062 CC lib/scsi/dev.o 00:04:01.062 CC lib/nvmf/tcp.o 00:04:01.062 CC lib/scsi/lun.o 00:04:01.062 CC lib/ftl/ftl_l2p_flat.o 00:04:01.062 CC lib/nvmf/stubs.o 00:04:01.062 CC lib/scsi/port.o 00:04:01.062 CC lib/ftl/ftl_nv_cache.o 00:04:01.062 CC lib/ftl/ftl_band.o 00:04:01.062 
CC lib/scsi/scsi.o 00:04:01.062 CC lib/nvmf/mdns_server.o 00:04:01.062 CC lib/ftl/ftl_band_ops.o 00:04:01.062 CC lib/nvmf/vfio_user.o 00:04:01.062 CC lib/scsi/scsi_bdev.o 00:04:01.062 CC lib/ftl/ftl_writer.o 00:04:01.062 CC lib/scsi/scsi_pr.o 00:04:01.062 CC lib/nvmf/rdma.o 00:04:01.062 CC lib/scsi/scsi_rpc.o 00:04:01.062 CC lib/ftl/ftl_rq.o 00:04:01.062 CC lib/nvmf/auth.o 00:04:01.062 CC lib/scsi/task.o 00:04:01.062 CC lib/ftl/ftl_reloc.o 00:04:01.062 CC lib/ftl/ftl_l2p_cache.o 00:04:01.062 CC lib/ftl/ftl_p2l.o 00:04:01.062 CC lib/ftl/ftl_p2l_log.o 00:04:01.062 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.062 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:01.062 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:01.320 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:01.320 CC lib/ftl/utils/ftl_conf.o 00:04:01.320 CC lib/ftl/utils/ftl_mempool.o 00:04:01.320 CC lib/ftl/utils/ftl_md.o 00:04:01.320 CC lib/ftl/utils/ftl_bitmap.o 00:04:01.320 CC lib/ftl/utils/ftl_property.o 00:04:01.320 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:01.320 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:01.320 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:01.320 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:01.320 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:01.320 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:01.320 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:01.320 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:01.320 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:01.320 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:01.320 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:01.320 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:01.320 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:01.320 CC lib/ftl/base/ftl_base_dev.o 00:04:01.320 CC lib/ftl/base/ftl_base_bdev.o 00:04:01.320 CC lib/ftl/ftl_trace.o 00:04:01.887 LIB libspdk_nbd.a 00:04:01.887 LIB libspdk_scsi.a 00:04:01.887 SO libspdk_nbd.so.7.0 00:04:01.887 SO libspdk_scsi.so.9.0 00:04:01.887 SYMLINK libspdk_nbd.so 00:04:01.887 LIB libspdk_ublk.a 00:04:01.887 SYMLINK libspdk_scsi.so 00:04:01.887 SO libspdk_ublk.so.3.0 00:04:01.887 SYMLINK libspdk_ublk.so 00:04:02.173 LIB libspdk_ftl.a 00:04:02.173 CC lib/iscsi/conn.o 00:04:02.173 CC lib/iscsi/init_grp.o 00:04:02.173 CC lib/iscsi/iscsi.o 00:04:02.173 CC lib/iscsi/param.o 00:04:02.173 CC lib/iscsi/portal_grp.o 00:04:02.173 CC lib/iscsi/tgt_node.o 00:04:02.173 CC lib/vhost/vhost.o 00:04:02.173 CC lib/iscsi/iscsi_subsystem.o 00:04:02.173 CC lib/vhost/vhost_rpc.o 00:04:02.173 CC lib/iscsi/iscsi_rpc.o 00:04:02.173 CC lib/vhost/vhost_scsi.o 00:04:02.173 CC lib/iscsi/task.o 00:04:02.173 CC lib/vhost/vhost_blk.o 00:04:02.173 CC lib/vhost/rte_vhost_user.o 00:04:02.173 SO libspdk_ftl.so.9.0 00:04:02.432 SYMLINK libspdk_ftl.so 00:04:02.999 LIB libspdk_vhost.a 00:04:02.999 LIB libspdk_nvmf.a 00:04:02.999 SO libspdk_vhost.so.8.0 00:04:02.999 SO libspdk_nvmf.so.20.0 00:04:02.999 SYMLINK libspdk_vhost.so 00:04:02.999 LIB libspdk_iscsi.a 00:04:03.258 SO libspdk_iscsi.so.8.0 00:04:03.258 SYMLINK libspdk_nvmf.so 00:04:03.258 SYMLINK libspdk_iscsi.so 00:04:03.843 CC module/env_dpdk/env_dpdk_rpc.o 00:04:03.843 CC module/vfu_device/vfu_virtio.o 00:04:03.843 CC module/vfu_device/vfu_virtio_blk.o 00:04:03.843 CC module/vfu_device/vfu_virtio_scsi.o 00:04:03.843 CC module/vfu_device/vfu_virtio_rpc.o 00:04:03.843 CC module/vfu_device/vfu_virtio_fs.o 00:04:03.843 LIB libspdk_env_dpdk_rpc.a 00:04:03.843 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:03.843 CC module/keyring/linux/keyring.o 00:04:03.843 CC module/keyring/linux/keyring_rpc.o 00:04:03.843 CC module/keyring/file/keyring.o 00:04:03.843 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:03.843 CC module/keyring/file/keyring_rpc.o 00:04:03.843 CC module/accel/error/accel_error.o 00:04:03.843 CC module/accel/error/accel_error_rpc.o 00:04:03.843 CC module/sock/posix/posix.o 00:04:03.843 CC module/accel/ioat/accel_ioat.o 00:04:03.843 CC module/accel/ioat/accel_ioat_rpc.o 00:04:03.843 CC module/accel/iaa/accel_iaa.o 00:04:03.843 CC module/accel/dsa/accel_dsa.o 00:04:03.843 CC module/fsdev/aio/fsdev_aio.o 00:04:03.843 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:03.843 CC module/accel/dsa/accel_dsa_rpc.o 00:04:03.843 CC module/accel/iaa/accel_iaa_rpc.o 00:04:03.843 CC module/scheduler/gscheduler/gscheduler.o 00:04:03.843 SO libspdk_env_dpdk_rpc.so.6.0 00:04:03.843 CC module/fsdev/aio/linux_aio_mgr.o 00:04:03.843 CC module/blob/bdev/blob_bdev.o 00:04:04.102 SYMLINK libspdk_env_dpdk_rpc.so 00:04:04.102 LIB libspdk_scheduler_dpdk_governor.a 00:04:04.102 LIB libspdk_scheduler_gscheduler.a 00:04:04.102 LIB libspdk_keyring_linux.a 00:04:04.102 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:04.102 LIB libspdk_keyring_file.a 00:04:04.102 SO libspdk_scheduler_gscheduler.so.4.0 00:04:04.102 SO libspdk_keyring_linux.so.1.0 00:04:04.102 LIB libspdk_accel_error.a 00:04:04.102 LIB libspdk_scheduler_dynamic.a 00:04:04.102 LIB libspdk_accel_ioat.a 00:04:04.102 LIB libspdk_accel_iaa.a 00:04:04.102 SO libspdk_keyring_file.so.2.0 00:04:04.102 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:04.102 SO libspdk_scheduler_dynamic.so.4.0 00:04:04.102 SO libspdk_accel_error.so.2.0 00:04:04.102 SO libspdk_accel_ioat.so.6.0 00:04:04.102 SO libspdk_accel_iaa.so.3.0 00:04:04.102 SYMLINK libspdk_scheduler_gscheduler.so 00:04:04.102 SYMLINK libspdk_keyring_linux.so 00:04:04.102 LIB libspdk_accel_dsa.a 00:04:04.102 SYMLINK libspdk_keyring_file.so 00:04:04.102 LIB libspdk_blob_bdev.a 00:04:04.102 SO libspdk_accel_dsa.so.5.0 00:04:04.102 SYMLINK libspdk_scheduler_dynamic.so 00:04:04.102 SYMLINK libspdk_accel_error.so 00:04:04.102 
SYMLINK libspdk_accel_ioat.so 00:04:04.102 SYMLINK libspdk_accel_iaa.so 00:04:04.359 SO libspdk_blob_bdev.so.11.0 00:04:04.359 SYMLINK libspdk_accel_dsa.so 00:04:04.359 LIB libspdk_vfu_device.a 00:04:04.359 SYMLINK libspdk_blob_bdev.so 00:04:04.359 SO libspdk_vfu_device.so.3.0 00:04:04.359 SYMLINK libspdk_vfu_device.so 00:04:04.359 LIB libspdk_fsdev_aio.a 00:04:04.618 LIB libspdk_sock_posix.a 00:04:04.618 SO libspdk_fsdev_aio.so.1.0 00:04:04.618 SO libspdk_sock_posix.so.6.0 00:04:04.618 SYMLINK libspdk_fsdev_aio.so 00:04:04.618 SYMLINK libspdk_sock_posix.so 00:04:04.618 CC module/bdev/gpt/vbdev_gpt.o 00:04:04.618 CC module/bdev/gpt/gpt.o 00:04:04.618 CC module/bdev/error/vbdev_error_rpc.o 00:04:04.618 CC module/bdev/error/vbdev_error.o 00:04:04.876 CC module/bdev/nvme/bdev_nvme.o 00:04:04.876 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:04.876 CC module/bdev/null/bdev_null_rpc.o 00:04:04.876 CC module/bdev/nvme/nvme_rpc.o 00:04:04.876 CC module/bdev/null/bdev_null.o 00:04:04.876 CC module/bdev/nvme/vbdev_opal.o 00:04:04.876 CC module/bdev/delay/vbdev_delay.o 00:04:04.876 CC module/bdev/nvme/bdev_mdns_client.o 00:04:04.876 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:04.876 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:04.876 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:04.876 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:04.876 CC module/bdev/split/vbdev_split.o 00:04:04.876 CC module/bdev/malloc/bdev_malloc.o 00:04:04.876 CC module/blobfs/bdev/blobfs_bdev.o 00:04:04.876 CC module/bdev/split/vbdev_split_rpc.o 00:04:04.876 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:04.876 CC module/bdev/lvol/vbdev_lvol.o 00:04:04.876 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:04.876 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:04.876 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:04.876 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:04.876 CC module/bdev/iscsi/bdev_iscsi.o 00:04:04.876 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:04.876 CC module/bdev/zone_block/vbdev_zone_block.o 
00:04:04.876 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:04.876 CC module/bdev/ftl/bdev_ftl.o 00:04:04.876 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:04.876 CC module/bdev/raid/bdev_raid.o 00:04:04.876 CC module/bdev/raid/bdev_raid_rpc.o 00:04:04.876 CC module/bdev/aio/bdev_aio.o 00:04:04.876 CC module/bdev/passthru/vbdev_passthru.o 00:04:04.876 CC module/bdev/raid/raid0.o 00:04:04.876 CC module/bdev/raid/bdev_raid_sb.o 00:04:04.876 CC module/bdev/aio/bdev_aio_rpc.o 00:04:04.876 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:04.876 CC module/bdev/raid/raid1.o 00:04:04.876 CC module/bdev/raid/concat.o 00:04:05.134 LIB libspdk_blobfs_bdev.a 00:04:05.134 LIB libspdk_bdev_split.a 00:04:05.134 LIB libspdk_bdev_error.a 00:04:05.134 SO libspdk_blobfs_bdev.so.6.0 00:04:05.134 SO libspdk_bdev_split.so.6.0 00:04:05.134 LIB libspdk_bdev_null.a 00:04:05.134 SO libspdk_bdev_error.so.6.0 00:04:05.134 LIB libspdk_bdev_ftl.a 00:04:05.134 LIB libspdk_bdev_gpt.a 00:04:05.134 SO libspdk_bdev_null.so.6.0 00:04:05.134 SO libspdk_bdev_ftl.so.6.0 00:04:05.134 SO libspdk_bdev_gpt.so.6.0 00:04:05.134 SYMLINK libspdk_bdev_split.so 00:04:05.134 SYMLINK libspdk_blobfs_bdev.so 00:04:05.134 LIB libspdk_bdev_passthru.a 00:04:05.134 SYMLINK libspdk_bdev_error.so 00:04:05.134 LIB libspdk_bdev_delay.a 00:04:05.134 LIB libspdk_bdev_aio.a 00:04:05.134 LIB libspdk_bdev_zone_block.a 00:04:05.134 LIB libspdk_bdev_iscsi.a 00:04:05.134 SYMLINK libspdk_bdev_null.so 00:04:05.134 LIB libspdk_bdev_malloc.a 00:04:05.134 SO libspdk_bdev_zone_block.so.6.0 00:04:05.134 SO libspdk_bdev_passthru.so.6.0 00:04:05.134 SYMLINK libspdk_bdev_ftl.so 00:04:05.134 SO libspdk_bdev_aio.so.6.0 00:04:05.134 SO libspdk_bdev_delay.so.6.0 00:04:05.134 SO libspdk_bdev_iscsi.so.6.0 00:04:05.134 SYMLINK libspdk_bdev_gpt.so 00:04:05.134 SO libspdk_bdev_malloc.so.6.0 00:04:05.134 SYMLINK libspdk_bdev_passthru.so 00:04:05.134 SYMLINK libspdk_bdev_zone_block.so 00:04:05.134 SYMLINK libspdk_bdev_aio.so 00:04:05.134 SYMLINK 
libspdk_bdev_iscsi.so 00:04:05.134 LIB libspdk_bdev_lvol.a 00:04:05.134 SYMLINK libspdk_bdev_delay.so 00:04:05.393 SYMLINK libspdk_bdev_malloc.so 00:04:05.393 SO libspdk_bdev_lvol.so.6.0 00:04:05.393 LIB libspdk_bdev_virtio.a 00:04:05.393 SO libspdk_bdev_virtio.so.6.0 00:04:05.393 SYMLINK libspdk_bdev_lvol.so 00:04:05.393 SYMLINK libspdk_bdev_virtio.so 00:04:05.652 LIB libspdk_bdev_raid.a 00:04:05.652 SO libspdk_bdev_raid.so.6.0 00:04:05.652 SYMLINK libspdk_bdev_raid.so 00:04:06.589 LIB libspdk_bdev_nvme.a 00:04:06.589 SO libspdk_bdev_nvme.so.7.1 00:04:06.589 SYMLINK libspdk_bdev_nvme.so 00:04:07.527 CC module/event/subsystems/vmd/vmd.o 00:04:07.527 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:07.527 CC module/event/subsystems/keyring/keyring.o 00:04:07.527 CC module/event/subsystems/scheduler/scheduler.o 00:04:07.527 CC module/event/subsystems/sock/sock.o 00:04:07.527 CC module/event/subsystems/iobuf/iobuf.o 00:04:07.527 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:07.527 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:07.527 CC module/event/subsystems/fsdev/fsdev.o 00:04:07.527 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:07.527 LIB libspdk_event_fsdev.a 00:04:07.527 LIB libspdk_event_vmd.a 00:04:07.527 LIB libspdk_event_keyring.a 00:04:07.527 LIB libspdk_event_scheduler.a 00:04:07.527 LIB libspdk_event_vfu_tgt.a 00:04:07.527 LIB libspdk_event_sock.a 00:04:07.527 LIB libspdk_event_vhost_blk.a 00:04:07.527 LIB libspdk_event_iobuf.a 00:04:07.527 SO libspdk_event_fsdev.so.1.0 00:04:07.527 SO libspdk_event_keyring.so.1.0 00:04:07.527 SO libspdk_event_vmd.so.6.0 00:04:07.527 SO libspdk_event_vhost_blk.so.3.0 00:04:07.527 SO libspdk_event_scheduler.so.4.0 00:04:07.527 SO libspdk_event_vfu_tgt.so.3.0 00:04:07.527 SO libspdk_event_sock.so.5.0 00:04:07.527 SO libspdk_event_iobuf.so.3.0 00:04:07.527 SYMLINK libspdk_event_fsdev.so 00:04:07.527 SYMLINK libspdk_event_keyring.so 00:04:07.527 SYMLINK libspdk_event_vhost_blk.so 00:04:07.527 SYMLINK 
libspdk_event_vfu_tgt.so 00:04:07.527 SYMLINK libspdk_event_vmd.so 00:04:07.527 SYMLINK libspdk_event_scheduler.so 00:04:07.527 SYMLINK libspdk_event_sock.so 00:04:07.527 SYMLINK libspdk_event_iobuf.so 00:04:07.786 CC module/event/subsystems/accel/accel.o 00:04:08.044 LIB libspdk_event_accel.a 00:04:08.044 SO libspdk_event_accel.so.6.0 00:04:08.044 SYMLINK libspdk_event_accel.so 00:04:08.303 CC module/event/subsystems/bdev/bdev.o 00:04:08.561 LIB libspdk_event_bdev.a 00:04:08.561 SO libspdk_event_bdev.so.6.0 00:04:08.561 SYMLINK libspdk_event_bdev.so 00:04:09.131 CC module/event/subsystems/scsi/scsi.o 00:04:09.131 CC module/event/subsystems/ublk/ublk.o 00:04:09.131 CC module/event/subsystems/nbd/nbd.o 00:04:09.131 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:09.131 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:09.131 LIB libspdk_event_ublk.a 00:04:09.131 LIB libspdk_event_nbd.a 00:04:09.131 LIB libspdk_event_scsi.a 00:04:09.131 SO libspdk_event_ublk.so.3.0 00:04:09.131 SO libspdk_event_nbd.so.6.0 00:04:09.131 SO libspdk_event_scsi.so.6.0 00:04:09.131 LIB libspdk_event_nvmf.a 00:04:09.131 SYMLINK libspdk_event_nbd.so 00:04:09.131 SYMLINK libspdk_event_ublk.so 00:04:09.131 SO libspdk_event_nvmf.so.6.0 00:04:09.132 SYMLINK libspdk_event_scsi.so 00:04:09.391 SYMLINK libspdk_event_nvmf.so 00:04:09.650 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.650 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.650 LIB libspdk_event_vhost_scsi.a 00:04:09.650 LIB libspdk_event_iscsi.a 00:04:09.650 SO libspdk_event_vhost_scsi.so.3.0 00:04:09.650 SO libspdk_event_iscsi.so.6.0 00:04:09.908 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.908 SYMLINK libspdk_event_iscsi.so 00:04:09.908 SO libspdk.so.6.0 00:04:09.908 SYMLINK libspdk.so 00:04:10.490 CC app/spdk_nvme_perf/perf.o 00:04:10.490 CC app/spdk_lspci/spdk_lspci.o 00:04:10.490 CC app/spdk_top/spdk_top.o 00:04:10.490 CXX app/trace/trace.o 00:04:10.490 CC app/trace_record/trace_record.o 00:04:10.490 CC 
app/spdk_nvme_discover/discovery_aer.o 00:04:10.490 CC test/rpc_client/rpc_client_test.o 00:04:10.490 CC app/spdk_nvme_identify/identify.o 00:04:10.490 TEST_HEADER include/spdk/assert.h 00:04:10.490 TEST_HEADER include/spdk/accel_module.h 00:04:10.490 TEST_HEADER include/spdk/barrier.h 00:04:10.490 TEST_HEADER include/spdk/accel.h 00:04:10.490 TEST_HEADER include/spdk/base64.h 00:04:10.490 TEST_HEADER include/spdk/bdev.h 00:04:10.490 TEST_HEADER include/spdk/bdev_module.h 00:04:10.490 TEST_HEADER include/spdk/bdev_zone.h 00:04:10.490 TEST_HEADER include/spdk/bit_pool.h 00:04:10.490 TEST_HEADER include/spdk/bit_array.h 00:04:10.490 TEST_HEADER include/spdk/blob_bdev.h 00:04:10.490 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:10.490 TEST_HEADER include/spdk/blobfs.h 00:04:10.490 TEST_HEADER include/spdk/blob.h 00:04:10.490 TEST_HEADER include/spdk/conf.h 00:04:10.490 TEST_HEADER include/spdk/config.h 00:04:10.490 TEST_HEADER include/spdk/cpuset.h 00:04:10.490 TEST_HEADER include/spdk/crc16.h 00:04:10.490 TEST_HEADER include/spdk/crc32.h 00:04:10.490 TEST_HEADER include/spdk/crc64.h 00:04:10.490 TEST_HEADER include/spdk/dma.h 00:04:10.490 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:10.490 TEST_HEADER include/spdk/dif.h 00:04:10.490 TEST_HEADER include/spdk/endian.h 00:04:10.490 TEST_HEADER include/spdk/env_dpdk.h 00:04:10.490 TEST_HEADER include/spdk/event.h 00:04:10.490 TEST_HEADER include/spdk/env.h 00:04:10.490 TEST_HEADER include/spdk/fd_group.h 00:04:10.490 TEST_HEADER include/spdk/fd.h 00:04:10.490 TEST_HEADER include/spdk/file.h 00:04:10.490 TEST_HEADER include/spdk/fsdev_module.h 00:04:10.490 TEST_HEADER include/spdk/fsdev.h 00:04:10.490 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:10.490 TEST_HEADER include/spdk/ftl.h 00:04:10.490 TEST_HEADER include/spdk/hexlify.h 00:04:10.490 CC app/iscsi_tgt/iscsi_tgt.o 00:04:10.490 TEST_HEADER include/spdk/gpt_spec.h 00:04:10.490 TEST_HEADER include/spdk/histogram_data.h 00:04:10.490 TEST_HEADER 
include/spdk/idxd.h 00:04:10.490 TEST_HEADER include/spdk/init.h 00:04:10.490 TEST_HEADER include/spdk/idxd_spec.h 00:04:10.490 TEST_HEADER include/spdk/ioat_spec.h 00:04:10.490 TEST_HEADER include/spdk/ioat.h 00:04:10.490 TEST_HEADER include/spdk/json.h 00:04:10.490 TEST_HEADER include/spdk/iscsi_spec.h 00:04:10.490 TEST_HEADER include/spdk/jsonrpc.h 00:04:10.490 TEST_HEADER include/spdk/keyring_module.h 00:04:10.490 TEST_HEADER include/spdk/keyring.h 00:04:10.490 CC app/spdk_dd/spdk_dd.o 00:04:10.490 TEST_HEADER include/spdk/likely.h 00:04:10.490 TEST_HEADER include/spdk/md5.h 00:04:10.490 TEST_HEADER include/spdk/log.h 00:04:10.490 TEST_HEADER include/spdk/memory.h 00:04:10.490 TEST_HEADER include/spdk/lvol.h 00:04:10.490 TEST_HEADER include/spdk/mmio.h 00:04:10.490 CC app/nvmf_tgt/nvmf_main.o 00:04:10.490 TEST_HEADER include/spdk/nbd.h 00:04:10.490 TEST_HEADER include/spdk/notify.h 00:04:10.490 TEST_HEADER include/spdk/net.h 00:04:10.490 TEST_HEADER include/spdk/nvme.h 00:04:10.490 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:10.490 TEST_HEADER include/spdk/nvme_intel.h 00:04:10.490 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:10.490 TEST_HEADER include/spdk/nvme_spec.h 00:04:10.490 CC app/spdk_tgt/spdk_tgt.o 00:04:10.490 TEST_HEADER include/spdk/nvme_zns.h 00:04:10.490 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:10.490 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:10.490 TEST_HEADER include/spdk/nvmf.h 00:04:10.490 TEST_HEADER include/spdk/nvmf_spec.h 00:04:10.490 TEST_HEADER include/spdk/nvmf_transport.h 00:04:10.490 TEST_HEADER include/spdk/opal.h 00:04:10.490 TEST_HEADER include/spdk/queue.h 00:04:10.490 TEST_HEADER include/spdk/opal_spec.h 00:04:10.490 TEST_HEADER include/spdk/pipe.h 00:04:10.490 TEST_HEADER include/spdk/pci_ids.h 00:04:10.490 TEST_HEADER include/spdk/rpc.h 00:04:10.490 TEST_HEADER include/spdk/scheduler.h 00:04:10.490 TEST_HEADER include/spdk/reduce.h 00:04:10.490 TEST_HEADER include/spdk/sock.h 00:04:10.490 TEST_HEADER 
include/spdk/scsi.h 00:04:10.490 TEST_HEADER include/spdk/stdinc.h 00:04:10.490 TEST_HEADER include/spdk/scsi_spec.h 00:04:10.490 TEST_HEADER include/spdk/string.h 00:04:10.490 TEST_HEADER include/spdk/trace_parser.h 00:04:10.490 TEST_HEADER include/spdk/tree.h 00:04:10.490 TEST_HEADER include/spdk/thread.h 00:04:10.490 TEST_HEADER include/spdk/trace.h 00:04:10.490 TEST_HEADER include/spdk/ublk.h 00:04:10.490 TEST_HEADER include/spdk/uuid.h 00:04:10.490 TEST_HEADER include/spdk/util.h 00:04:10.490 TEST_HEADER include/spdk/version.h 00:04:10.490 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:10.490 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:10.490 TEST_HEADER include/spdk/vhost.h 00:04:10.490 TEST_HEADER include/spdk/vmd.h 00:04:10.490 TEST_HEADER include/spdk/xor.h 00:04:10.490 TEST_HEADER include/spdk/zipf.h 00:04:10.490 CXX test/cpp_headers/accel.o 00:04:10.490 CXX test/cpp_headers/barrier.o 00:04:10.490 CXX test/cpp_headers/accel_module.o 00:04:10.490 CXX test/cpp_headers/assert.o 00:04:10.490 CXX test/cpp_headers/base64.o 00:04:10.490 CXX test/cpp_headers/bdev.o 00:04:10.490 CXX test/cpp_headers/bdev_module.o 00:04:10.490 CXX test/cpp_headers/bit_array.o 00:04:10.490 CXX test/cpp_headers/bit_pool.o 00:04:10.490 CXX test/cpp_headers/blob_bdev.o 00:04:10.490 CXX test/cpp_headers/bdev_zone.o 00:04:10.490 CXX test/cpp_headers/blobfs_bdev.o 00:04:10.490 CXX test/cpp_headers/blobfs.o 00:04:10.490 CXX test/cpp_headers/blob.o 00:04:10.490 CXX test/cpp_headers/conf.o 00:04:10.490 CXX test/cpp_headers/cpuset.o 00:04:10.490 CXX test/cpp_headers/config.o 00:04:10.490 CXX test/cpp_headers/crc16.o 00:04:10.490 CXX test/cpp_headers/crc64.o 00:04:10.490 CXX test/cpp_headers/crc32.o 00:04:10.490 CXX test/cpp_headers/dif.o 00:04:10.490 CXX test/cpp_headers/dma.o 00:04:10.490 CXX test/cpp_headers/env_dpdk.o 00:04:10.490 CXX test/cpp_headers/endian.o 00:04:10.490 CXX test/cpp_headers/env.o 00:04:10.490 CXX test/cpp_headers/event.o 00:04:10.490 CXX test/cpp_headers/fd_group.o 
00:04:10.490 CXX test/cpp_headers/fd.o 00:04:10.490 CXX test/cpp_headers/file.o 00:04:10.491 CXX test/cpp_headers/fsdev.o 00:04:10.491 CXX test/cpp_headers/ftl.o 00:04:10.491 CXX test/cpp_headers/fuse_dispatcher.o 00:04:10.491 CXX test/cpp_headers/hexlify.o 00:04:10.491 CXX test/cpp_headers/fsdev_module.o 00:04:10.491 CXX test/cpp_headers/gpt_spec.o 00:04:10.491 CXX test/cpp_headers/idxd.o 00:04:10.491 CXX test/cpp_headers/histogram_data.o 00:04:10.491 CXX test/cpp_headers/init.o 00:04:10.491 CXX test/cpp_headers/idxd_spec.o 00:04:10.491 CXX test/cpp_headers/ioat.o 00:04:10.491 CXX test/cpp_headers/ioat_spec.o 00:04:10.491 CXX test/cpp_headers/json.o 00:04:10.491 CXX test/cpp_headers/jsonrpc.o 00:04:10.491 CXX test/cpp_headers/iscsi_spec.o 00:04:10.491 CXX test/cpp_headers/keyring_module.o 00:04:10.491 CXX test/cpp_headers/keyring.o 00:04:10.491 CXX test/cpp_headers/log.o 00:04:10.491 CXX test/cpp_headers/md5.o 00:04:10.491 CXX test/cpp_headers/lvol.o 00:04:10.491 CXX test/cpp_headers/memory.o 00:04:10.491 CXX test/cpp_headers/likely.o 00:04:10.491 CXX test/cpp_headers/mmio.o 00:04:10.491 CXX test/cpp_headers/nbd.o 00:04:10.491 CXX test/cpp_headers/notify.o 00:04:10.491 CXX test/cpp_headers/net.o 00:04:10.491 CXX test/cpp_headers/nvme.o 00:04:10.491 CXX test/cpp_headers/nvme_ocssd.o 00:04:10.491 CXX test/cpp_headers/nvme_spec.o 00:04:10.491 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:10.491 CXX test/cpp_headers/nvme_intel.o 00:04:10.491 CXX test/cpp_headers/nvme_zns.o 00:04:10.491 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:10.491 CXX test/cpp_headers/nvmf.o 00:04:10.491 CXX test/cpp_headers/nvmf_cmd.o 00:04:10.491 CXX test/cpp_headers/nvmf_transport.o 00:04:10.491 CXX test/cpp_headers/nvmf_spec.o 00:04:10.491 CXX test/cpp_headers/opal.o 00:04:10.491 CXX test/cpp_headers/opal_spec.o 00:04:10.491 CXX test/cpp_headers/pci_ids.o 00:04:10.491 CXX test/cpp_headers/pipe.o 00:04:10.491 CXX test/cpp_headers/queue.o 00:04:10.491 CXX test/cpp_headers/reduce.o 00:04:10.491 CXX 
test/cpp_headers/rpc.o 00:04:10.491 CXX test/cpp_headers/scheduler.o 00:04:10.491 CXX test/cpp_headers/scsi_spec.o 00:04:10.491 CXX test/cpp_headers/scsi.o 00:04:10.491 CC app/fio/nvme/fio_plugin.o 00:04:10.491 CXX test/cpp_headers/sock.o 00:04:10.491 CC test/app/histogram_perf/histogram_perf.o 00:04:10.491 CC examples/util/zipf/zipf.o 00:04:10.491 CXX test/cpp_headers/stdinc.o 00:04:10.491 CXX test/cpp_headers/string.o 00:04:10.491 CXX test/cpp_headers/thread.o 00:04:10.491 CXX test/cpp_headers/trace.o 00:04:10.491 CC examples/ioat/verify/verify.o 00:04:10.491 CXX test/cpp_headers/tree.o 00:04:10.491 CXX test/cpp_headers/trace_parser.o 00:04:10.491 CC test/app/stub/stub.o 00:04:10.491 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:10.491 CC test/thread/poller_perf/poller_perf.o 00:04:10.491 CC test/env/memory/memory_ut.o 00:04:10.491 LINK spdk_lspci 00:04:10.491 CC examples/ioat/perf/perf.o 00:04:10.491 CC test/app/jsoncat/jsoncat.o 00:04:10.491 CC test/env/vtophys/vtophys.o 00:04:10.491 CC test/env/pci/pci_ut.o 00:04:10.491 CC app/fio/bdev/fio_plugin.o 00:04:10.491 CXX test/cpp_headers/ublk.o 00:04:10.765 CC test/app/bdev_svc/bdev_svc.o 00:04:10.765 CC test/dma/test_dma/test_dma.o 00:04:10.765 CXX test/cpp_headers/util.o 00:04:10.765 CXX test/cpp_headers/uuid.o 00:04:10.765 LINK spdk_nvme_discover 00:04:10.765 LINK nvmf_tgt 00:04:10.765 LINK rpc_client_test 00:04:10.765 LINK interrupt_tgt 00:04:11.032 LINK spdk_trace_record 00:04:11.032 CC test/env/mem_callbacks/mem_callbacks.o 00:04:11.032 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.032 LINK iscsi_tgt 00:04:11.032 LINK jsoncat 00:04:11.032 LINK vtophys 00:04:11.032 LINK env_dpdk_post_init 00:04:11.032 CXX test/cpp_headers/version.o 00:04:11.291 CXX test/cpp_headers/vfio_user_pci.o 00:04:11.291 CXX test/cpp_headers/vfio_user_spec.o 00:04:11.291 CXX test/cpp_headers/vhost.o 00:04:11.291 CXX test/cpp_headers/vmd.o 00:04:11.291 CXX test/cpp_headers/xor.o 00:04:11.291 CXX test/cpp_headers/zipf.o 
00:04:11.291 LINK spdk_dd 00:04:11.291 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:11.291 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:11.291 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:11.291 LINK histogram_perf 00:04:11.291 LINK spdk_trace 00:04:11.291 LINK zipf 00:04:11.291 LINK poller_perf 00:04:11.291 LINK spdk_tgt 00:04:11.291 LINK stub 00:04:11.291 LINK bdev_svc 00:04:11.291 LINK ioat_perf 00:04:11.291 LINK verify 00:04:11.549 LINK pci_ut 00:04:11.549 LINK spdk_bdev 00:04:11.549 LINK spdk_nvme_perf 00:04:11.549 LINK spdk_nvme_identify 00:04:11.549 LINK nvme_fuzz 00:04:11.549 LINK spdk_nvme 00:04:11.549 LINK vhost_fuzz 00:04:11.549 LINK spdk_top 00:04:11.806 LINK test_dma 00:04:11.806 CC app/vhost/vhost.o 00:04:11.806 LINK mem_callbacks 00:04:11.806 CC examples/vmd/lsvmd/lsvmd.o 00:04:11.806 CC test/event/reactor_perf/reactor_perf.o 00:04:11.806 CC test/event/event_perf/event_perf.o 00:04:11.806 CC examples/vmd/led/led.o 00:04:11.806 CC examples/sock/hello_world/hello_sock.o 00:04:11.806 CC examples/idxd/perf/perf.o 00:04:11.806 CC test/event/reactor/reactor.o 00:04:11.806 CC test/event/app_repeat/app_repeat.o 00:04:11.806 CC test/event/scheduler/scheduler.o 00:04:11.806 CC examples/thread/thread/thread_ex.o 00:04:11.806 LINK vhost 00:04:11.806 LINK event_perf 00:04:11.806 LINK lsvmd 00:04:11.806 LINK led 00:04:11.806 LINK reactor_perf 00:04:12.063 LINK reactor 00:04:12.063 LINK app_repeat 00:04:12.063 LINK hello_sock 00:04:12.063 LINK idxd_perf 00:04:12.063 LINK scheduler 00:04:12.063 LINK thread 00:04:12.063 LINK memory_ut 00:04:12.063 CC test/nvme/startup/startup.o 00:04:12.063 CC test/nvme/boot_partition/boot_partition.o 00:04:12.063 CC test/nvme/compliance/nvme_compliance.o 00:04:12.064 CC test/nvme/reset/reset.o 00:04:12.064 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:12.064 CC test/nvme/err_injection/err_injection.o 00:04:12.064 CC test/nvme/connect_stress/connect_stress.o 00:04:12.064 CC test/nvme/simple_copy/simple_copy.o 
00:04:12.064 CC test/nvme/sgl/sgl.o 00:04:12.064 CC test/nvme/aer/aer.o 00:04:12.064 CC test/nvme/e2edp/nvme_dp.o 00:04:12.064 CC test/nvme/fused_ordering/fused_ordering.o 00:04:12.064 CC test/nvme/cuse/cuse.o 00:04:12.064 CC test/nvme/overhead/overhead.o 00:04:12.064 CC test/nvme/reserve/reserve.o 00:04:12.064 CC test/nvme/fdp/fdp.o 00:04:12.322 CC test/accel/dif/dif.o 00:04:12.322 CC test/blobfs/mkfs/mkfs.o 00:04:12.322 CC test/lvol/esnap/esnap.o 00:04:12.322 LINK startup 00:04:12.322 LINK err_injection 00:04:12.322 LINK boot_partition 00:04:12.322 LINK connect_stress 00:04:12.322 LINK doorbell_aers 00:04:12.322 LINK fused_ordering 00:04:12.322 LINK reserve 00:04:12.322 LINK simple_copy 00:04:12.322 LINK reset 00:04:12.322 LINK overhead 00:04:12.322 LINK aer 00:04:12.322 LINK nvme_dp 00:04:12.322 LINK sgl 00:04:12.322 LINK mkfs 00:04:12.581 CC examples/nvme/arbitration/arbitration.o 00:04:12.581 LINK nvme_compliance 00:04:12.581 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:12.581 CC examples/nvme/hello_world/hello_world.o 00:04:12.581 CC examples/nvme/abort/abort.o 00:04:12.581 CC examples/nvme/hotplug/hotplug.o 00:04:12.581 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:12.581 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:12.581 LINK fdp 00:04:12.581 CC examples/nvme/reconnect/reconnect.o 00:04:12.581 CC examples/accel/perf/accel_perf.o 00:04:12.581 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:12.581 LINK iscsi_fuzz 00:04:12.581 CC examples/blob/hello_world/hello_blob.o 00:04:12.581 CC examples/blob/cli/blobcli.o 00:04:12.581 LINK pmr_persistence 00:04:12.581 LINK cmb_copy 00:04:12.581 LINK hotplug 00:04:12.581 LINK hello_world 00:04:12.839 LINK arbitration 00:04:12.839 LINK dif 00:04:12.839 LINK reconnect 00:04:12.839 LINK abort 00:04:12.839 LINK hello_fsdev 00:04:12.839 LINK nvme_manage 00:04:12.839 LINK hello_blob 00:04:12.839 LINK accel_perf 00:04:13.098 LINK blobcli 00:04:13.098 LINK cuse 00:04:13.356 CC test/bdev/bdevio/bdevio.o 
00:04:13.356 CC examples/bdev/hello_world/hello_bdev.o 00:04:13.356 CC examples/bdev/bdevperf/bdevperf.o 00:04:13.616 LINK bdevio 00:04:13.616 LINK hello_bdev 00:04:14.185 LINK bdevperf 00:04:14.442 CC examples/nvmf/nvmf/nvmf.o 00:04:14.700 LINK nvmf 00:04:15.634 LINK esnap 00:04:15.893 00:04:15.893 real 0m52.658s 00:04:15.893 user 7m55.578s 00:04:15.893 sys 3m55.547s 00:04:15.893 12:18:21 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:15.893 12:18:21 make -- common/autotest_common.sh@10 -- $ set +x 00:04:15.893 ************************************ 00:04:15.893 END TEST make 00:04:15.893 ************************************ 00:04:15.893 12:18:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:15.893 12:18:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:15.893 12:18:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:15.893 12:18:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.893 12:18:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:15.893 12:18:21 -- pm/common@44 -- $ pid=629749 00:04:15.893 12:18:21 -- pm/common@50 -- $ kill -TERM 629749 00:04:15.893 12:18:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.893 12:18:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:15.893 12:18:21 -- pm/common@44 -- $ pid=629750 00:04:15.893 12:18:21 -- pm/common@50 -- $ kill -TERM 629750 00:04:15.893 12:18:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.893 12:18:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:15.893 12:18:21 -- pm/common@44 -- $ pid=629753 00:04:15.893 12:18:21 -- pm/common@50 -- $ kill -TERM 629753 00:04:15.893 12:18:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.893 12:18:21 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:15.893 12:18:21 -- pm/common@44 -- $ pid=629775 00:04:15.893 12:18:21 -- pm/common@50 -- $ sudo -E kill -TERM 629775 00:04:15.893 12:18:21 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:15.893 12:18:21 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:16.152 12:18:21 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:16.152 12:18:21 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.152 12:18:21 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:16.152 12:18:21 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.152 12:18:21 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.152 12:18:21 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.152 12:18:21 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.152 12:18:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.152 12:18:21 -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.152 12:18:21 -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.152 12:18:21 -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.152 12:18:21 -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.152 12:18:21 -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.152 12:18:21 -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.152 12:18:21 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.152 12:18:21 -- scripts/common.sh@344 -- # case "$op" in 00:04:16.152 12:18:21 -- scripts/common.sh@345 -- # : 1 00:04:16.152 12:18:21 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.152 12:18:21 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.152 12:18:21 -- scripts/common.sh@365 -- # decimal 1 00:04:16.152 12:18:21 -- scripts/common.sh@353 -- # local d=1 00:04:16.152 12:18:21 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.153 12:18:21 -- scripts/common.sh@355 -- # echo 1 00:04:16.153 12:18:21 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.153 12:18:21 -- scripts/common.sh@366 -- # decimal 2 00:04:16.153 12:18:21 -- scripts/common.sh@353 -- # local d=2 00:04:16.153 12:18:21 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.153 12:18:21 -- scripts/common.sh@355 -- # echo 2 00:04:16.153 12:18:21 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.153 12:18:21 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.153 12:18:21 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.153 12:18:21 -- scripts/common.sh@368 -- # return 0 00:04:16.153 12:18:21 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.153 12:18:21 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.153 --rc genhtml_branch_coverage=1 00:04:16.153 --rc genhtml_function_coverage=1 00:04:16.153 --rc genhtml_legend=1 00:04:16.153 --rc geninfo_all_blocks=1 00:04:16.153 --rc geninfo_unexecuted_blocks=1 00:04:16.153 00:04:16.153 ' 00:04:16.153 12:18:21 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.153 --rc genhtml_branch_coverage=1 00:04:16.153 --rc genhtml_function_coverage=1 00:04:16.153 --rc genhtml_legend=1 00:04:16.153 --rc geninfo_all_blocks=1 00:04:16.153 --rc geninfo_unexecuted_blocks=1 00:04:16.153 00:04:16.153 ' 00:04:16.153 12:18:21 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.153 --rc genhtml_branch_coverage=1 00:04:16.153 --rc 
genhtml_function_coverage=1 00:04:16.153 --rc genhtml_legend=1 00:04:16.153 --rc geninfo_all_blocks=1 00:04:16.153 --rc geninfo_unexecuted_blocks=1 00:04:16.153 00:04:16.153 ' 00:04:16.153 12:18:21 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.153 --rc genhtml_branch_coverage=1 00:04:16.153 --rc genhtml_function_coverage=1 00:04:16.153 --rc genhtml_legend=1 00:04:16.153 --rc geninfo_all_blocks=1 00:04:16.153 --rc geninfo_unexecuted_blocks=1 00:04:16.153 00:04:16.153 ' 00:04:16.153 12:18:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.153 12:18:21 -- nvmf/common.sh@7 -- # uname -s 00:04:16.153 12:18:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.153 12:18:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.153 12:18:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.153 12:18:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.153 12:18:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.153 12:18:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.153 12:18:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:16.153 12:18:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.153 12:18:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.153 12:18:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.153 12:18:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:04:16.153 12:18:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:04:16.153 12:18:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.153 12:18:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.153 12:18:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:16.153 12:18:21 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.153 12:18:21 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:16.153 12:18:21 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.153 12:18:21 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.153 12:18:21 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.153 12:18:21 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.153 12:18:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.153 12:18:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.153 12:18:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.153 12:18:21 -- paths/export.sh@5 -- # export PATH 00:04:16.153 12:18:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.153 12:18:21 -- nvmf/common.sh@51 -- # : 0 00:04:16.153 12:18:21 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.153 12:18:21 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:16.153 12:18:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.153 12:18:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.153 12:18:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.153 12:18:21 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.153 12:18:21 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.153 12:18:21 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.153 12:18:21 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.153 12:18:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:16.153 12:18:21 -- spdk/autotest.sh@32 -- # uname -s 00:04:16.153 12:18:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:16.153 12:18:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:16.153 12:18:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:16.153 12:18:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:16.153 12:18:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:16.153 12:18:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:16.153 12:18:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:16.153 12:18:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:16.153 12:18:21 -- spdk/autotest.sh@48 -- # udevadm_pid=693622 00:04:16.153 12:18:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:16.153 12:18:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:16.153 12:18:21 -- pm/common@17 -- # local monitor 00:04:16.153 12:18:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.153 12:18:21 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:16.153 12:18:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.153 12:18:21 -- pm/common@21 -- # date +%s 00:04:16.153 12:18:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.153 12:18:21 -- pm/common@21 -- # date +%s 00:04:16.153 12:18:21 -- pm/common@25 -- # sleep 1 00:04:16.153 12:18:21 -- pm/common@21 -- # date +%s 00:04:16.153 12:18:21 -- pm/common@21 -- # date +%s 00:04:16.153 12:18:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101501 00:04:16.153 12:18:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101501 00:04:16.153 12:18:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101501 00:04:16.153 12:18:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101501 00:04:16.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101501_collect-cpu-load.pm.log 00:04:16.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101501_collect-vmstat.pm.log 00:04:16.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101501_collect-cpu-temp.pm.log 00:04:16.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101501_collect-bmc-pm.bmc.pm.log 00:04:17.090 
12:18:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:17.090 12:18:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:17.090 12:18:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.090 12:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:17.090 12:18:22 -- spdk/autotest.sh@59 -- # create_test_list 00:04:17.090 12:18:22 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:17.090 12:18:22 -- common/autotest_common.sh@10 -- # set +x 00:04:17.383 12:18:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:17.383 12:18:22 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.383 12:18:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.383 12:18:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:17.383 12:18:22 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.383 12:18:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:17.383 12:18:22 -- common/autotest_common.sh@1457 -- # uname 00:04:17.383 12:18:22 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:17.383 12:18:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:17.383 12:18:22 -- common/autotest_common.sh@1477 -- # uname 00:04:17.383 12:18:22 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:17.383 12:18:22 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:17.383 12:18:22 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:17.383 lcov: LCOV version 1.15 00:04:17.383 12:18:22 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:27.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:27.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:42.351 12:18:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:42.351 12:18:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.351 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.351 12:18:45 -- spdk/autotest.sh@78 -- # rm -f 00:04:42.351 12:18:45 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.729 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:04:43.729 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:43.729 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:d9:00.0 (8086 0a54): Already using the nvme driver 00:04:43.729 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:43.729 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:43.988 
0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:43.988 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:43.988 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:43.988 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:43.988 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:43.988 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:43.988 12:18:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:43.988 12:18:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:43.988 12:18:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:43.988 12:18:49 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:43.988 12:18:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:43.988 12:18:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.988 12:18:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:43.988 12:18:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:43.988 12:18:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:43.988 12:18:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:43.988 12:18:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:43.988 12:18:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:43.988 12:18:49 -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:43.988 12:18:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:43.988 12:18:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:43.988 12:18:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:43.988 12:18:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:43.988 12:18:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:43.988 12:18:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.988 12:18:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:43.989 12:18:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:43.989 12:18:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:43.989 12:18:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:43.989 No valid GPT data, bailing 00:04:43.989 12:18:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.246 12:18:49 -- scripts/common.sh@394 -- # pt= 00:04:44.246 12:18:49 -- scripts/common.sh@395 -- # return 1 00:04:44.246 12:18:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:44.246 1+0 records in 00:04:44.246 1+0 records out 00:04:44.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00159553 s, 657 MB/s 00:04:44.246 12:18:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.246 12:18:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.246 12:18:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:44.246 12:18:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:44.246 12:18:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:44.246 No valid GPT data, bailing 00:04:44.246 12:18:49 -- scripts/common.sh@394 -- # blkid -s 
PTTYPE -o value /dev/nvme1n1 00:04:44.246 12:18:49 -- scripts/common.sh@394 -- # pt= 00:04:44.246 12:18:49 -- scripts/common.sh@395 -- # return 1 00:04:44.246 12:18:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:44.246 1+0 records in 00:04:44.246 1+0 records out 00:04:44.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392646 s, 267 MB/s 00:04:44.246 12:18:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.246 12:18:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.246 12:18:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:44.246 12:18:49 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:44.246 12:18:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:44.246 No valid GPT data, bailing 00:04:44.246 12:18:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:44.246 12:18:49 -- scripts/common.sh@394 -- # pt= 00:04:44.246 12:18:49 -- scripts/common.sh@395 -- # return 1 00:04:44.246 12:18:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:44.246 1+0 records in 00:04:44.246 1+0 records out 00:04:44.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00111693 s, 939 MB/s 00:04:44.246 12:18:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.246 12:18:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.246 12:18:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:44.246 12:18:49 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:44.246 12:18:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:44.246 No valid GPT data, bailing 00:04:44.246 12:18:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:44.246 12:18:49 -- scripts/common.sh@394 -- # pt= 00:04:44.246 12:18:49 -- scripts/common.sh@395 -- # return 1 00:04:44.246 12:18:49 -- 
spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:44.246 1+0 records in 00:04:44.246 1+0 records out 00:04:44.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00119628 s, 877 MB/s 00:04:44.246 12:18:49 -- spdk/autotest.sh@105 -- # sync 00:04:44.246 12:18:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:44.246 12:18:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:44.246 12:18:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:50.822 12:18:56 -- spdk/autotest.sh@111 -- # uname -s 00:04:50.822 12:18:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:50.822 12:18:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:50.822 12:18:56 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:54.117 Hugepages 00:04:54.117 node hugesize free / total 00:04:54.117 node0 1048576kB 0 / 0 00:04:54.117 node0 2048kB 0 / 0 00:04:54.117 node1 1048576kB 0 / 0 00:04:54.117 node1 2048kB 0 / 0 00:04:54.117 00:04:54.117 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.117 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:54.117 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:54.117 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:54.117 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:54.117 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:54.117 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:54.117 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:54.117 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:54.117 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1 00:04:54.117 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:54.117 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:54.117 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:54.117 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:54.117 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:54.117 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:54.117 
I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:54.117 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:54.117 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:54.117 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme3 nvme3n1 00:04:54.377 NVMe 0000:d9:00.0 8086 0a54 1 nvme nvme2 nvme2n1 00:04:54.377 12:18:59 -- spdk/autotest.sh@117 -- # uname -s 00:04:54.377 12:18:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:54.377 12:18:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:54.377 12:18:59 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.668 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.927 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.865 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.803 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.803 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.803 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.803 12:19:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:00.741 12:19:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:00.741 12:19:06 -- common/autotest_common.sh@1518 -- # local bdfs 
00:05:00.741 12:19:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.741 12:19:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:00.741 12:19:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:00.741 12:19:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:00.741 12:19:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.741 12:19:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.741 12:19:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:01.001 12:19:06 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:01.001 12:19:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 0000:d9:00.0 00:05:01.001 12:19:06 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.304 Waiting for block devices as requested 00:05:04.304 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:05:04.304 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:04.304 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:04.304 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:04.564 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:04.564 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:04.564 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:04.823 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:04.823 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:04.823 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:05.082 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:05:05.082 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:05.082 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:05.342 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:05.342 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.342 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:05.601 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:05.601 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:05.601 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:05.601 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:05.861 12:19:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:05.861 12:19:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:05.861 12:19:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:05.861 12:19:11 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:05:05.861 12:19:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:05.861 12:19:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 ]] 00:05:05.861 12:19:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:05.861 12:19:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:05.861 12:19:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:05.861 12:19:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:05.861 12:19:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:05.861 12:19:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:05.861 12:19:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:05.861 12:19:11 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:05.861 12:19:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:05.861 12:19:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:05.861 12:19:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:05.861 12:19:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:05.861 12:19:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:05.861 
12:19:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:05.861 12:19:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:05.861 12:19:11 -- common/autotest_common.sh@1543 -- # continue 00:05:05.861 12:19:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:05.861 12:19:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:05.862 12:19:11 -- common/autotest_common.sh@1487 -- # grep 0000:5f:00.0/nvme/nvme 00:05:05.862 12:19:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:05:05.862 12:19:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:05.862 12:19:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:05.862 12:19:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:05.862 12:19:11 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:05.862 12:19:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:05.862 12:19:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:05.862 12:19:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:05.862 12:19:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:05.862 12:19:11 -- 
common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:05.862 12:19:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:05.862 12:19:11 -- common/autotest_common.sh@1543 -- # continue 00:05:05.862 12:19:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:05.862 12:19:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:05.862 12:19:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:05.862 12:19:11 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:06.122 12:19:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme3 00:05:06.122 12:19:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme3 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme3 00:05:06.122 12:19:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:06.122 12:19:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:06.122 12:19:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:06.122 12:19:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:06.122 12:19:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:06.122 12:19:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:06.122 12:19:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:06.122 12:19:11 -- 
common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:06.122 12:19:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1543 -- # continue 00:05:06.122 12:19:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:06.122 12:19:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d9:00.0 00:05:06.122 12:19:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:06.122 12:19:11 -- common/autotest_common.sh@1487 -- # grep 0000:d9:00.0/nvme/nvme 00:05:06.122 12:19:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:01.0/0000:d9:00.0/nvme/nvme2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:01.0/0000:d9:00.0/nvme/nvme2 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:01.0/0000:d9:00.0/nvme/nvme2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:06.122 12:19:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:06.122 12:19:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:06.122 12:19:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:06.122 12:19:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:06.122 12:19:11 -- 
common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:06.122 12:19:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:06.122 12:19:11 -- common/autotest_common.sh@1543 -- # continue 00:05:06.122 12:19:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:06.122 12:19:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.122 12:19:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.122 12:19:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:06.122 12:19:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.122 12:19:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.122 12:19:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.413 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:09.413 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.673 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.673 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.673 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.673 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.673 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.673 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:09.673 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.612 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.547 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.547 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.547 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.547 12:19:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:11.547 
12:19:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.547 12:19:17 -- common/autotest_common.sh@10 -- # set +x 00:05:11.805 12:19:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:11.805 12:19:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:11.805 12:19:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.805 12:19:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:11.805 12:19:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:11.805 12:19:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:11.805 12:19:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:11.805 12:19:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.805 12:19:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.805 12:19:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.805 12:19:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.805 12:19:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 0000:d9:00.0 00:05:11.805 12:19:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.805 12:19:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:11.805 12:19:17 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:11.805 12:19:17 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.805 12:19:17 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.805 12:19:17 -- 
common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:05:11.805 12:19:17 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:11.805 12:19:17 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.805 12:19:17 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.805 12:19:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:11.805 12:19:17 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:11.805 12:19:17 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.805 12:19:17 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.805 12:19:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d9:00.0/device 00:05:11.805 12:19:17 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:11.805 12:19:17 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:11.805 12:19:17 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1572 -- # (( 4 > 0 )) 00:05:11.805 12:19:17 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 0000:d9:00.0 00:05:11.805 12:19:17 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:05:11.805 12:19:17 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=710255 00:05:11.805 12:19:17 -- common/autotest_common.sh@1585 -- # waitforlisten 710255 00:05:11.805 12:19:17 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.805 12:19:17 -- common/autotest_common.sh@835 -- # '[' -z 710255 ']' 00:05:11.805 12:19:17 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.805 12:19:17 -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:11.805 12:19:17 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.805 12:19:17 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.805 12:19:17 -- common/autotest_common.sh@10 -- # set +x 00:05:11.805 [2024-11-20 12:19:17.498658] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:05:11.805 [2024-11-20 12:19:17.498706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710255 ] 00:05:12.094 [2024-11-20 12:19:17.574617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.094 [2024-11-20 12:19:17.613036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.658 12:19:18 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.658 12:19:18 -- common/autotest_common.sh@868 -- # return 0 00:05:12.658 12:19:18 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:12.658 12:19:18 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:12.658 12:19:18 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:15.942 nvme0n1 00:05:15.942 12:19:21 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:15.942 [2024-11-20 12:19:21.464999] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:15.942 request: 00:05:15.942 { 00:05:15.942 "nvme_ctrlr_name": "nvme0", 00:05:15.942 "password": "test", 00:05:15.942 "method": "bdev_nvme_opal_revert", 
00:05:15.942 "req_id": 1 00:05:15.942 } 00:05:15.942 Got JSON-RPC error response 00:05:15.942 response: 00:05:15.942 { 00:05:15.942 "code": -32602, 00:05:15.942 "message": "Invalid parameters" 00:05:15.942 } 00:05:15.942 12:19:21 -- common/autotest_common.sh@1591 -- # true 00:05:15.942 12:19:21 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:15.942 12:19:21 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:15.942 12:19:21 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:5f:00.0 00:05:19.230 nvme1n1 00:05:19.230 12:19:24 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:05:19.230 [2024-11-20 12:19:24.662513] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:05:19.230 request: 00:05:19.230 { 00:05:19.230 "nvme_ctrlr_name": "nvme1", 00:05:19.230 "password": "test", 00:05:19.230 "method": "bdev_nvme_opal_revert", 00:05:19.230 "req_id": 1 00:05:19.230 } 00:05:19.230 Got JSON-RPC error response 00:05:19.230 response: 00:05:19.230 { 00:05:19.230 "code": -32602, 00:05:19.230 "message": "Invalid parameters" 00:05:19.230 } 00:05:19.230 12:19:24 -- common/autotest_common.sh@1591 -- # true 00:05:19.230 12:19:24 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:19.230 12:19:24 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:19.230 12:19:24 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme2 -t pcie -a 0000:d8:00.0 00:05:22.520 nvme2n1 00:05:22.520 12:19:27 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme2 -p test 00:05:22.520 [2024-11-20 12:19:27.848280] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: 
nvme2 not support opal 00:05:22.520 request: 00:05:22.520 { 00:05:22.520 "nvme_ctrlr_name": "nvme2", 00:05:22.520 "password": "test", 00:05:22.520 "method": "bdev_nvme_opal_revert", 00:05:22.520 "req_id": 1 00:05:22.520 } 00:05:22.520 Got JSON-RPC error response 00:05:22.520 response: 00:05:22.520 { 00:05:22.520 "code": -32602, 00:05:22.520 "message": "Invalid parameters" 00:05:22.520 } 00:05:22.520 12:19:27 -- common/autotest_common.sh@1591 -- # true 00:05:22.520 12:19:27 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:22.520 12:19:27 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:22.520 12:19:27 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme3 -t pcie -a 0000:d9:00.0 00:05:25.092 nvme3n1 00:05:25.352 12:19:30 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme3 -p test 00:05:25.352 [2024-11-20 12:19:31.035565] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme3 not support opal 00:05:25.352 request: 00:05:25.352 { 00:05:25.352 "nvme_ctrlr_name": "nvme3", 00:05:25.352 "password": "test", 00:05:25.352 "method": "bdev_nvme_opal_revert", 00:05:25.352 "req_id": 1 00:05:25.352 } 00:05:25.352 Got JSON-RPC error response 00:05:25.352 response: 00:05:25.352 { 00:05:25.352 "code": -32602, 00:05:25.352 "message": "Invalid parameters" 00:05:25.352 } 00:05:25.352 12:19:31 -- common/autotest_common.sh@1591 -- # true 00:05:25.352 12:19:31 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:25.352 12:19:31 -- common/autotest_common.sh@1595 -- # killprocess 710255 00:05:25.352 12:19:31 -- common/autotest_common.sh@954 -- # '[' -z 710255 ']' 00:05:25.352 12:19:31 -- common/autotest_common.sh@958 -- # kill -0 710255 00:05:25.352 12:19:31 -- common/autotest_common.sh@959 -- # uname 00:05:25.352 12:19:31 -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:05:25.352 12:19:31 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 710255 00:05:25.611 12:19:31 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.611 12:19:31 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.611 12:19:31 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 710255' 00:05:25.611 killing process with pid 710255 00:05:25.611 12:19:31 -- common/autotest_common.sh@973 -- # kill 710255 00:05:25.611 12:19:31 -- common/autotest_common.sh@978 -- # wait 710255 00:05:28.929 12:19:34 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:28.930 12:19:34 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:28.930 12:19:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:28.930 12:19:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:28.930 12:19:34 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:28.930 12:19:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.930 12:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:28.930 12:19:34 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:28.930 12:19:34 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:28.930 12:19:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.930 12:19:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.930 12:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:28.930 ************************************ 00:05:28.930 START TEST env 00:05:28.930 ************************************ 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:28.930 * Looking for test storage... 
00:05:28.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.930 12:19:34 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.930 12:19:34 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.930 12:19:34 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.930 12:19:34 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.930 12:19:34 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.930 12:19:34 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.930 12:19:34 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.930 12:19:34 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.930 12:19:34 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.930 12:19:34 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.930 12:19:34 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.930 12:19:34 env -- scripts/common.sh@344 -- # case "$op" in 00:05:28.930 12:19:34 env -- scripts/common.sh@345 -- # : 1 00:05:28.930 12:19:34 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.930 12:19:34 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.930 12:19:34 env -- scripts/common.sh@365 -- # decimal 1 00:05:28.930 12:19:34 env -- scripts/common.sh@353 -- # local d=1 00:05:28.930 12:19:34 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.930 12:19:34 env -- scripts/common.sh@355 -- # echo 1 00:05:28.930 12:19:34 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.930 12:19:34 env -- scripts/common.sh@366 -- # decimal 2 00:05:28.930 12:19:34 env -- scripts/common.sh@353 -- # local d=2 00:05:28.930 12:19:34 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.930 12:19:34 env -- scripts/common.sh@355 -- # echo 2 00:05:28.930 12:19:34 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.930 12:19:34 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.930 12:19:34 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.930 12:19:34 env -- scripts/common.sh@368 -- # return 0 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.930 --rc genhtml_branch_coverage=1 00:05:28.930 --rc genhtml_function_coverage=1 00:05:28.930 --rc genhtml_legend=1 00:05:28.930 --rc geninfo_all_blocks=1 00:05:28.930 --rc geninfo_unexecuted_blocks=1 00:05:28.930 00:05:28.930 ' 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.930 --rc genhtml_branch_coverage=1 00:05:28.930 --rc genhtml_function_coverage=1 00:05:28.930 --rc genhtml_legend=1 00:05:28.930 --rc geninfo_all_blocks=1 00:05:28.930 --rc geninfo_unexecuted_blocks=1 00:05:28.930 00:05:28.930 ' 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:28.930 --rc genhtml_branch_coverage=1 00:05:28.930 --rc genhtml_function_coverage=1 00:05:28.930 --rc genhtml_legend=1 00:05:28.930 --rc geninfo_all_blocks=1 00:05:28.930 --rc geninfo_unexecuted_blocks=1 00:05:28.930 00:05:28.930 ' 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.930 --rc genhtml_branch_coverage=1 00:05:28.930 --rc genhtml_function_coverage=1 00:05:28.930 --rc genhtml_legend=1 00:05:28.930 --rc geninfo_all_blocks=1 00:05:28.930 --rc geninfo_unexecuted_blocks=1 00:05:28.930 00:05:28.930 ' 00:05:28.930 12:19:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.930 12:19:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.930 12:19:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.930 ************************************ 00:05:28.930 START TEST env_memory 00:05:28.930 ************************************ 00:05:28.930 12:19:34 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:28.930 00:05:28.930 00:05:28.930 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.930 http://cunit.sourceforge.net/ 00:05:28.930 00:05:28.930 00:05:28.930 Suite: memory 00:05:28.930 Test: alloc and free memory map ...[2024-11-20 12:19:34.648417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:28.930 passed 00:05:29.257 Test: mem map translation ...[2024-11-20 12:19:34.665539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.257 [2024-11-20 
12:19:34.665553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.257 [2024-11-20 12:19:34.665587] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.257 [2024-11-20 12:19:34.665593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:29.257 passed 00:05:29.257 Test: mem map registration ...[2024-11-20 12:19:34.703308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:29.257 [2024-11-20 12:19:34.703322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:29.257 passed 00:05:29.257 Test: mem map adjacent registrations ...passed 00:05:29.257 00:05:29.257 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.257 suites 1 1 n/a 0 0 00:05:29.257 tests 4 4 4 0 0 00:05:29.257 asserts 152 152 152 0 n/a 00:05:29.257 00:05:29.257 Elapsed time = 0.131 seconds 00:05:29.257 00:05:29.257 real 0m0.144s 00:05:29.257 user 0m0.134s 00:05:29.257 sys 0m0.010s 00:05:29.257 12:19:34 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.257 12:19:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:29.257 ************************************ 00:05:29.257 END TEST env_memory 00:05:29.257 ************************************ 00:05:29.257 12:19:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.257 12:19:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:29.257 12:19:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.257 12:19:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.257 ************************************ 00:05:29.257 START TEST env_vtophys 00:05:29.257 ************************************ 00:05:29.257 12:19:34 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.257 EAL: lib.eal log level changed from notice to debug 00:05:29.257 EAL: Detected lcore 0 as core 0 on socket 0 00:05:29.257 EAL: Detected lcore 1 as core 1 on socket 0 00:05:29.257 EAL: Detected lcore 2 as core 2 on socket 0 00:05:29.257 EAL: Detected lcore 3 as core 3 on socket 0 00:05:29.257 EAL: Detected lcore 4 as core 4 on socket 0 00:05:29.257 EAL: Detected lcore 5 as core 5 on socket 0 00:05:29.257 EAL: Detected lcore 6 as core 6 on socket 0 00:05:29.257 EAL: Detected lcore 7 as core 8 on socket 0 00:05:29.257 EAL: Detected lcore 8 as core 9 on socket 0 00:05:29.257 EAL: Detected lcore 9 as core 10 on socket 0 00:05:29.257 EAL: Detected lcore 10 as core 11 on socket 0 00:05:29.257 EAL: Detected lcore 11 as core 12 on socket 0 00:05:29.257 EAL: Detected lcore 12 as core 13 on socket 0 00:05:29.257 EAL: Detected lcore 13 as core 14 on socket 0 00:05:29.257 EAL: Detected lcore 14 as core 16 on socket 0 00:05:29.257 EAL: Detected lcore 15 as core 17 on socket 0 00:05:29.257 EAL: Detected lcore 16 as core 18 on socket 0 00:05:29.257 EAL: Detected lcore 17 as core 19 on socket 0 00:05:29.257 EAL: Detected lcore 18 as core 20 on socket 0 00:05:29.257 EAL: Detected lcore 19 as core 21 on socket 0 00:05:29.257 EAL: Detected lcore 20 as core 22 on socket 0 00:05:29.257 EAL: Detected lcore 21 as core 24 on socket 0 00:05:29.257 EAL: Detected lcore 22 as core 25 on socket 0 00:05:29.257 EAL: Detected lcore 23 as core 26 on socket 0 00:05:29.257 EAL: Detected lcore 24 as core 27 on socket 0 00:05:29.257 EAL: Detected lcore 25 
as core 28 on socket 0 00:05:29.257 EAL: Detected lcore 26 as core 29 on socket 0 00:05:29.257 EAL: Detected lcore 27 as core 30 on socket 0 00:05:29.257 EAL: Detected lcore 28 as core 0 on socket 1 00:05:29.257 EAL: Detected lcore 29 as core 1 on socket 1 00:05:29.257 EAL: Detected lcore 30 as core 2 on socket 1 00:05:29.257 EAL: Detected lcore 31 as core 3 on socket 1 00:05:29.257 EAL: Detected lcore 32 as core 4 on socket 1 00:05:29.257 EAL: Detected lcore 33 as core 5 on socket 1 00:05:29.257 EAL: Detected lcore 34 as core 6 on socket 1 00:05:29.257 EAL: Detected lcore 35 as core 8 on socket 1 00:05:29.257 EAL: Detected lcore 36 as core 9 on socket 1 00:05:29.257 EAL: Detected lcore 37 as core 10 on socket 1 00:05:29.257 EAL: Detected lcore 38 as core 11 on socket 1 00:05:29.257 EAL: Detected lcore 39 as core 12 on socket 1 00:05:29.257 EAL: Detected lcore 40 as core 13 on socket 1 00:05:29.257 EAL: Detected lcore 41 as core 14 on socket 1 00:05:29.257 EAL: Detected lcore 42 as core 16 on socket 1 00:05:29.257 EAL: Detected lcore 43 as core 17 on socket 1 00:05:29.257 EAL: Detected lcore 44 as core 18 on socket 1 00:05:29.257 EAL: Detected lcore 45 as core 19 on socket 1 00:05:29.257 EAL: Detected lcore 46 as core 20 on socket 1 00:05:29.257 EAL: Detected lcore 47 as core 21 on socket 1 00:05:29.257 EAL: Detected lcore 48 as core 22 on socket 1 00:05:29.257 EAL: Detected lcore 49 as core 24 on socket 1 00:05:29.257 EAL: Detected lcore 50 as core 25 on socket 1 00:05:29.257 EAL: Detected lcore 51 as core 26 on socket 1 00:05:29.257 EAL: Detected lcore 52 as core 27 on socket 1 00:05:29.257 EAL: Detected lcore 53 as core 28 on socket 1 00:05:29.257 EAL: Detected lcore 54 as core 29 on socket 1 00:05:29.257 EAL: Detected lcore 55 as core 30 on socket 1 00:05:29.257 EAL: Detected lcore 56 as core 0 on socket 0 00:05:29.257 EAL: Detected lcore 57 as core 1 on socket 0 00:05:29.257 EAL: Detected lcore 58 as core 2 on socket 0 00:05:29.257 EAL: Detected lcore 59 as 
core 3 on socket 0 00:05:29.257 EAL: Detected lcore 60 as core 4 on socket 0 00:05:29.257 EAL: Detected lcore 61 as core 5 on socket 0 00:05:29.257 EAL: Detected lcore 62 as core 6 on socket 0 00:05:29.257 EAL: Detected lcore 63 as core 8 on socket 0 00:05:29.257 EAL: Detected lcore 64 as core 9 on socket 0 00:05:29.257 EAL: Detected lcore 65 as core 10 on socket 0 00:05:29.257 EAL: Detected lcore 66 as core 11 on socket 0 00:05:29.257 EAL: Detected lcore 67 as core 12 on socket 0 00:05:29.257 EAL: Detected lcore 68 as core 13 on socket 0 00:05:29.257 EAL: Detected lcore 69 as core 14 on socket 0 00:05:29.257 EAL: Detected lcore 70 as core 16 on socket 0 00:05:29.257 EAL: Detected lcore 71 as core 17 on socket 0 00:05:29.257 EAL: Detected lcore 72 as core 18 on socket 0 00:05:29.257 EAL: Detected lcore 73 as core 19 on socket 0 00:05:29.257 EAL: Detected lcore 74 as core 20 on socket 0 00:05:29.257 EAL: Detected lcore 75 as core 21 on socket 0 00:05:29.257 EAL: Detected lcore 76 as core 22 on socket 0 00:05:29.257 EAL: Detected lcore 77 as core 24 on socket 0 00:05:29.257 EAL: Detected lcore 78 as core 25 on socket 0 00:05:29.257 EAL: Detected lcore 79 as core 26 on socket 0 00:05:29.257 EAL: Detected lcore 80 as core 27 on socket 0 00:05:29.257 EAL: Detected lcore 81 as core 28 on socket 0 00:05:29.257 EAL: Detected lcore 82 as core 29 on socket 0 00:05:29.257 EAL: Detected lcore 83 as core 30 on socket 0 00:05:29.257 EAL: Detected lcore 84 as core 0 on socket 1 00:05:29.257 EAL: Detected lcore 85 as core 1 on socket 1 00:05:29.257 EAL: Detected lcore 86 as core 2 on socket 1 00:05:29.257 EAL: Detected lcore 87 as core 3 on socket 1 00:05:29.257 EAL: Detected lcore 88 as core 4 on socket 1 00:05:29.257 EAL: Detected lcore 89 as core 5 on socket 1 00:05:29.257 EAL: Detected lcore 90 as core 6 on socket 1 00:05:29.257 EAL: Detected lcore 91 as core 8 on socket 1 00:05:29.257 EAL: Detected lcore 92 as core 9 on socket 1 00:05:29.257 EAL: Detected lcore 93 as core 10 
on socket 1 00:05:29.257 EAL: Detected lcore 94 as core 11 on socket 1 00:05:29.257 EAL: Detected lcore 95 as core 12 on socket 1 00:05:29.257 EAL: Detected lcore 96 as core 13 on socket 1 00:05:29.257 EAL: Detected lcore 97 as core 14 on socket 1 00:05:29.257 EAL: Detected lcore 98 as core 16 on socket 1 00:05:29.257 EAL: Detected lcore 99 as core 17 on socket 1 00:05:29.257 EAL: Detected lcore 100 as core 18 on socket 1 00:05:29.257 EAL: Detected lcore 101 as core 19 on socket 1 00:05:29.257 EAL: Detected lcore 102 as core 20 on socket 1 00:05:29.257 EAL: Detected lcore 103 as core 21 on socket 1 00:05:29.257 EAL: Detected lcore 104 as core 22 on socket 1 00:05:29.257 EAL: Detected lcore 105 as core 24 on socket 1 00:05:29.257 EAL: Detected lcore 106 as core 25 on socket 1 00:05:29.257 EAL: Detected lcore 107 as core 26 on socket 1 00:05:29.258 EAL: Detected lcore 108 as core 27 on socket 1 00:05:29.258 EAL: Detected lcore 109 as core 28 on socket 1 00:05:29.258 EAL: Detected lcore 110 as core 29 on socket 1 00:05:29.258 EAL: Detected lcore 111 as core 30 on socket 1 00:05:29.258 EAL: Maximum logical cores by configuration: 128 00:05:29.258 EAL: Detected CPU lcores: 112 00:05:29.258 EAL: Detected NUMA nodes: 2 00:05:29.258 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:29.258 EAL: Detected shared linkage of DPDK 00:05:29.258 EAL: No shared files mode enabled, IPC will be disabled 00:05:29.258 EAL: Bus pci wants IOVA as 'DC' 00:05:29.258 EAL: Buses did not request a specific IOVA mode. 00:05:29.258 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:29.258 EAL: Selected IOVA mode 'VA' 00:05:29.258 EAL: Probing VFIO support... 
00:05:29.258 EAL: IOMMU type 1 (Type 1) is supported 00:05:29.258 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:29.258 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:29.258 EAL: VFIO support initialized 00:05:29.258 EAL: Ask a virtual area of 0x2e000 bytes 00:05:29.258 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:29.258 EAL: Setting up physically contiguous memory... 00:05:29.258 EAL: Setting maximum number of open files to 524288 00:05:29.258 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:29.258 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:29.258 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:29.258 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:29.258 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:29.258 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:29.258 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:29.258 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:29.258 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:29.258 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:29.258 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:29.258 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.258 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:29.258 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.258 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.258 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:29.258 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:29.258 EAL: Hugepages will be freed exactly as allocated. 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: TSC frequency is ~2200000 KHz 00:05:29.258 EAL: Main lcore 0 is ready (tid=7fab2f908a00;cpuset=[0]) 00:05:29.258 EAL: Trying to obtain current memory policy. 00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 0 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 2MB 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:29.258 EAL: Mem event callback 'spdk:(nil)' registered 00:05:29.258 00:05:29.258 00:05:29.258 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.258 http://cunit.sourceforge.net/ 00:05:29.258 00:05:29.258 00:05:29.258 Suite: components_suite 00:05:29.258 Test: vtophys_malloc_test ...passed 00:05:29.258 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 4 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 4MB 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was shrunk by 4MB 00:05:29.258 EAL: Trying to obtain current memory policy. 
00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 4 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 6MB 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was shrunk by 6MB 00:05:29.258 EAL: Trying to obtain current memory policy. 00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 4 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 10MB 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was shrunk by 10MB 00:05:29.258 EAL: Trying to obtain current memory policy. 00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 4 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 18MB 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was shrunk by 18MB 00:05:29.258 EAL: Trying to obtain current memory policy. 
00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 4 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 34MB 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was shrunk by 34MB 00:05:29.258 EAL: Trying to obtain current memory policy. 00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 4 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 66MB 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was shrunk by 66MB 00:05:29.258 EAL: Trying to obtain current memory policy. 00:05:29.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.258 EAL: Restoring previous memory policy: 4 00:05:29.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.258 EAL: request: mp_malloc_sync 00:05:29.258 EAL: No shared files mode enabled, IPC is disabled 00:05:29.258 EAL: Heap on socket 0 was expanded by 130MB 00:05:29.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.545 EAL: request: mp_malloc_sync 00:05:29.545 EAL: No shared files mode enabled, IPC is disabled 00:05:29.545 EAL: Heap on socket 0 was shrunk by 130MB 00:05:29.545 EAL: Trying to obtain current memory policy. 
00:05:29.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.545 EAL: Restoring previous memory policy: 4 00:05:29.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.545 EAL: request: mp_malloc_sync 00:05:29.545 EAL: No shared files mode enabled, IPC is disabled 00:05:29.545 EAL: Heap on socket 0 was expanded by 258MB 00:05:29.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.545 EAL: request: mp_malloc_sync 00:05:29.545 EAL: No shared files mode enabled, IPC is disabled 00:05:29.545 EAL: Heap on socket 0 was shrunk by 258MB 00:05:29.545 EAL: Trying to obtain current memory policy. 00:05:29.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.545 EAL: Restoring previous memory policy: 4 00:05:29.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.545 EAL: request: mp_malloc_sync 00:05:29.545 EAL: No shared files mode enabled, IPC is disabled 00:05:29.545 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.804 EAL: request: mp_malloc_sync 00:05:29.804 EAL: No shared files mode enabled, IPC is disabled 00:05:29.804 EAL: Heap on socket 0 was shrunk by 514MB 00:05:29.804 EAL: Trying to obtain current memory policy. 
00:05:29.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.063 EAL: Restoring previous memory policy: 4 00:05:30.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.063 EAL: request: mp_malloc_sync 00:05:30.063 EAL: No shared files mode enabled, IPC is disabled 00:05:30.063 EAL: Heap on socket 0 was expanded by 1026MB 00:05:30.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.322 EAL: request: mp_malloc_sync 00:05:30.322 EAL: No shared files mode enabled, IPC is disabled 00:05:30.322 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:30.322 passed 00:05:30.322 00:05:30.322 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.322 suites 1 1 n/a 0 0 00:05:30.322 tests 2 2 2 0 0 00:05:30.322 asserts 497 497 497 0 n/a 00:05:30.322 00:05:30.322 Elapsed time = 0.957 seconds 00:05:30.322 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.322 EAL: request: mp_malloc_sync 00:05:30.322 EAL: No shared files mode enabled, IPC is disabled 00:05:30.322 EAL: Heap on socket 0 was shrunk by 2MB 00:05:30.322 EAL: No shared files mode enabled, IPC is disabled 00:05:30.322 EAL: No shared files mode enabled, IPC is disabled 00:05:30.322 EAL: No shared files mode enabled, IPC is disabled 00:05:30.322 00:05:30.322 real 0m1.086s 00:05:30.322 user 0m0.646s 00:05:30.322 sys 0m0.417s 00:05:30.322 12:19:35 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.322 12:19:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:30.322 ************************************ 00:05:30.322 END TEST env_vtophys 00:05:30.322 ************************************ 00:05:30.322 12:19:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:30.322 12:19:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.322 12:19:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.322 12:19:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.322 
************************************ 00:05:30.322 START TEST env_pci 00:05:30.322 ************************************ 00:05:30.322 12:19:35 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:30.322 00:05:30.322 00:05:30.322 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.322 http://cunit.sourceforge.net/ 00:05:30.322 00:05:30.322 00:05:30.322 Suite: pci 00:05:30.322 Test: pci_hook ...[2024-11-20 12:19:35.999044] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 713665 has claimed it 00:05:30.322 EAL: Cannot find device (10000:00:01.0) 00:05:30.322 EAL: Failed to attach device on primary process 00:05:30.322 passed 00:05:30.322 00:05:30.322 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.322 suites 1 1 n/a 0 0 00:05:30.322 tests 1 1 1 0 0 00:05:30.322 asserts 25 25 25 0 n/a 00:05:30.323 00:05:30.323 Elapsed time = 0.029 seconds 00:05:30.323 00:05:30.323 real 0m0.050s 00:05:30.323 user 0m0.016s 00:05:30.323 sys 0m0.034s 00:05:30.323 12:19:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.323 12:19:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:30.323 ************************************ 00:05:30.323 END TEST env_pci 00:05:30.323 ************************************ 00:05:30.323 12:19:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:30.323 12:19:36 env -- env/env.sh@15 -- # uname 00:05:30.323 12:19:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:30.323 12:19:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:30.323 12:19:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.323 12:19:36 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:30.323 12:19:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.323 12:19:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.582 ************************************ 00:05:30.582 START TEST env_dpdk_post_init 00:05:30.582 ************************************ 00:05:30.582 12:19:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.582 EAL: Detected CPU lcores: 112 00:05:30.582 EAL: Detected NUMA nodes: 2 00:05:30.582 EAL: Detected shared linkage of DPDK 00:05:30.582 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.582 EAL: Selected IOVA mode 'VA' 00:05:30.582 EAL: VFIO support initialized 00:05:30.582 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.582 EAL: Using IOMMU type 1 (Type 1) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:30.582 EAL: Ignore mapping IO port bar(1) 00:05:30.582 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:31.520 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:32.090 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:32.090 EAL: Ignore mapping IO port bar(1) 00:05:32.090 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:32.090 EAL: Ignore mapping IO port bar(1) 00:05:32.090 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:32.090 EAL: Ignore mapping IO port bar(1) 00:05:32.090 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:32.349 EAL: Ignore mapping IO port bar(1) 00:05:32.349 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:32.349 EAL: Ignore mapping IO port bar(1) 00:05:32.349 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:32.349 EAL: Ignore mapping IO port bar(1) 00:05:32.349 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:32.349 EAL: Ignore mapping IO port bar(1) 00:05:32.349 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:32.349 EAL: Ignore mapping IO port bar(1) 00:05:32.349 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:32.918 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:33.857 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d9:00.0 (socket 1) 00:05:37.147 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:37.147 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001048000 00:05:37.716 EAL: Releasing PCI mapped resource for 0000:d9:00.0 00:05:37.716 EAL: Calling pci_unmap_resource for 0000:d9:00.0 at 0x20200104c000 00:05:37.975 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:37.975 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:38.544 EAL: Releasing 
PCI mapped resource for 0000:5f:00.0 00:05:38.544 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001024000 00:05:38.803 Starting DPDK initialization... 00:05:38.803 Starting SPDK post initialization... 00:05:38.803 SPDK NVMe probe 00:05:38.803 Attaching to 0000:5e:00.0 00:05:38.803 Attaching to 0000:5f:00.0 00:05:38.803 Attaching to 0000:d8:00.0 00:05:38.803 Attaching to 0000:d9:00.0 00:05:38.803 Attached to 0000:5e:00.0 00:05:38.803 Attached to 0000:5f:00.0 00:05:38.803 Attached to 0000:d8:00.0 00:05:38.803 Attached to 0000:d9:00.0 00:05:38.803 Cleaning up... 00:05:38.803 00:05:38.803 real 0m8.269s 00:05:38.803 user 0m3.744s 00:05:38.803 sys 0m1.392s 00:05:38.803 12:19:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.803 12:19:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.803 ************************************ 00:05:38.803 END TEST env_dpdk_post_init 00:05:38.803 ************************************ 00:05:38.803 12:19:44 env -- env/env.sh@26 -- # uname 00:05:38.803 12:19:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:38.803 12:19:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.803 12:19:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.803 12:19:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.803 12:19:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.803 ************************************ 00:05:38.803 START TEST env_mem_callbacks 00:05:38.803 ************************************ 00:05:38.803 12:19:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.803 EAL: Detected CPU lcores: 112 00:05:38.803 EAL: Detected NUMA nodes: 2 00:05:38.803 EAL: Detected shared linkage of DPDK 00:05:38.803 EAL: Multi-process 
socket /var/run/dpdk/rte/mp_socket 00:05:38.803 EAL: Selected IOVA mode 'VA' 00:05:38.803 EAL: VFIO support initialized 00:05:38.803 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.803 00:05:38.803 00:05:38.803 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.803 http://cunit.sourceforge.net/ 00:05:38.803 00:05:38.803 00:05:38.803 Suite: memory 00:05:38.803 Test: test ... 00:05:38.803 register 0x200000200000 2097152 00:05:38.803 malloc 3145728 00:05:38.803 register 0x200000400000 4194304 00:05:38.803 buf 0x200000500000 len 3145728 PASSED 00:05:38.803 malloc 64 00:05:38.803 buf 0x2000004fff40 len 64 PASSED 00:05:38.803 malloc 4194304 00:05:38.803 register 0x200000800000 6291456 00:05:38.803 buf 0x200000a00000 len 4194304 PASSED 00:05:38.803 free 0x200000500000 3145728 00:05:38.803 free 0x2000004fff40 64 00:05:38.803 unregister 0x200000400000 4194304 PASSED 00:05:38.803 free 0x200000a00000 4194304 00:05:38.803 unregister 0x200000800000 6291456 PASSED 00:05:38.803 malloc 8388608 00:05:38.803 register 0x200000400000 10485760 00:05:38.803 buf 0x200000600000 len 8388608 PASSED 00:05:38.803 free 0x200000600000 8388608 00:05:38.803 unregister 0x200000400000 10485760 PASSED 00:05:38.803 passed 00:05:38.803 00:05:38.803 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.803 suites 1 1 n/a 0 0 00:05:38.803 tests 1 1 1 0 0 00:05:38.803 asserts 15 15 15 0 n/a 00:05:38.803 00:05:38.803 Elapsed time = 0.008 seconds 00:05:38.803 00:05:38.803 real 0m0.060s 00:05:38.803 user 0m0.025s 00:05:38.803 sys 0m0.035s 00:05:38.803 12:19:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.803 12:19:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:38.803 ************************************ 00:05:38.803 END TEST env_mem_callbacks 00:05:38.803 ************************************ 00:05:38.803 00:05:38.803 real 0m10.157s 00:05:38.803 user 0m4.798s 00:05:38.803 sys 0m2.239s 00:05:38.803 
12:19:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.803 12:19:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.803 ************************************ 00:05:38.803 END TEST env 00:05:38.803 ************************************ 00:05:39.065 12:19:44 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:39.065 12:19:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.065 12:19:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.065 12:19:44 -- common/autotest_common.sh@10 -- # set +x 00:05:39.065 ************************************ 00:05:39.065 START TEST rpc 00:05:39.065 ************************************ 00:05:39.065 12:19:44 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:39.065 * Looking for test storage... 00:05:39.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.065 12:19:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.065 12:19:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.065 12:19:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.065 12:19:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.065 12:19:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.065 12:19:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.065 12:19:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.066 12:19:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.066 12:19:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.066 12:19:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.066 12:19:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.066 12:19:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.066 12:19:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.066 12:19:44 rpc -- 
scripts/common.sh@341 -- # ver2_l=1 00:05:39.066 12:19:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.066 12:19:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.066 12:19:44 rpc -- scripts/common.sh@345 -- # : 1 00:05:39.066 12:19:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.066 12:19:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.066 12:19:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.066 12:19:44 rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.066 12:19:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.066 12:19:44 rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.066 12:19:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.066 12:19:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.066 12:19:44 rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.066 12:19:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.066 12:19:44 rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.066 12:19:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.066 12:19:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.066 12:19:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.066 12:19:44 rpc -- scripts/common.sh@368 -- # return 0 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.066 --rc genhtml_branch_coverage=1 00:05:39.066 --rc genhtml_function_coverage=1 00:05:39.066 --rc genhtml_legend=1 00:05:39.066 --rc geninfo_all_blocks=1 00:05:39.066 --rc geninfo_unexecuted_blocks=1 00:05:39.066 00:05:39.066 ' 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:39.066 --rc genhtml_branch_coverage=1 00:05:39.066 --rc genhtml_function_coverage=1 00:05:39.066 --rc genhtml_legend=1 00:05:39.066 --rc geninfo_all_blocks=1 00:05:39.066 --rc geninfo_unexecuted_blocks=1 00:05:39.066 00:05:39.066 ' 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.066 --rc genhtml_branch_coverage=1 00:05:39.066 --rc genhtml_function_coverage=1 00:05:39.066 --rc genhtml_legend=1 00:05:39.066 --rc geninfo_all_blocks=1 00:05:39.066 --rc geninfo_unexecuted_blocks=1 00:05:39.066 00:05:39.066 ' 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.066 --rc genhtml_branch_coverage=1 00:05:39.066 --rc genhtml_function_coverage=1 00:05:39.066 --rc genhtml_legend=1 00:05:39.066 --rc geninfo_all_blocks=1 00:05:39.066 --rc geninfo_unexecuted_blocks=1 00:05:39.066 00:05:39.066 ' 00:05:39.066 12:19:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=715470 00:05:39.066 12:19:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.066 12:19:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:39.066 12:19:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 715470 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 715470 ']' 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.066 12:19:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.327 [2024-11-20 12:19:44.861526] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:05:39.327 [2024-11-20 12:19:44.861571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715470 ] 00:05:39.327 [2024-11-20 12:19:44.932253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.327 [2024-11-20 12:19:44.972198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:39.327 [2024-11-20 12:19:44.972230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 715470' to capture a snapshot of events at runtime. 00:05:39.327 [2024-11-20 12:19:44.972237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.327 [2024-11-20 12:19:44.972243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.327 [2024-11-20 12:19:44.972248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid715470 for offline analysis/debug. 
00:05:39.327 [2024-11-20 12:19:44.972836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.586 12:19:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.586 12:19:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.586 12:19:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.586 12:19:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.586 12:19:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:39.586 12:19:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:39.586 12:19:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.586 12:19:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.586 12:19:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 ************************************ 00:05:39.586 START TEST rpc_integrity 00:05:39.586 ************************************ 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.586 12:19:45 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.586 { 00:05:39.586 "name": "Malloc0", 00:05:39.586 "aliases": [ 00:05:39.586 "2d9a372f-93d2-41ba-a687-539cd08aa407" 00:05:39.586 ], 00:05:39.586 "product_name": "Malloc disk", 00:05:39.586 "block_size": 512, 00:05:39.586 "num_blocks": 16384, 00:05:39.586 "uuid": "2d9a372f-93d2-41ba-a687-539cd08aa407", 00:05:39.586 "assigned_rate_limits": { 00:05:39.586 "rw_ios_per_sec": 0, 00:05:39.586 "rw_mbytes_per_sec": 0, 00:05:39.586 "r_mbytes_per_sec": 0, 00:05:39.586 "w_mbytes_per_sec": 0 00:05:39.586 }, 00:05:39.586 "claimed": false, 00:05:39.586 "zoned": false, 00:05:39.586 "supported_io_types": { 00:05:39.586 "read": true, 00:05:39.586 "write": true, 00:05:39.586 "unmap": true, 00:05:39.586 "flush": true, 00:05:39.586 "reset": true, 00:05:39.586 "nvme_admin": false, 00:05:39.586 "nvme_io": false, 00:05:39.586 "nvme_io_md": false, 00:05:39.586 "write_zeroes": true, 00:05:39.586 "zcopy": true, 00:05:39.586 "get_zone_info": false, 00:05:39.586 
"zone_management": false, 00:05:39.586 "zone_append": false, 00:05:39.586 "compare": false, 00:05:39.586 "compare_and_write": false, 00:05:39.586 "abort": true, 00:05:39.586 "seek_hole": false, 00:05:39.586 "seek_data": false, 00:05:39.586 "copy": true, 00:05:39.586 "nvme_iov_md": false 00:05:39.586 }, 00:05:39.586 "memory_domains": [ 00:05:39.586 { 00:05:39.586 "dma_device_id": "system", 00:05:39.586 "dma_device_type": 1 00:05:39.586 }, 00:05:39.586 { 00:05:39.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.586 "dma_device_type": 2 00:05:39.586 } 00:05:39.586 ], 00:05:39.586 "driver_specific": {} 00:05:39.586 } 00:05:39.586 ]' 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.586 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.586 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.846 [2024-11-20 12:19:45.351017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:39.846 [2024-11-20 12:19:45.351044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.846 [2024-11-20 12:19:45.351057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x786b20 00:05:39.846 [2024-11-20 12:19:45.351063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.846 [2024-11-20 12:19:45.352063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.846 [2024-11-20 12:19:45.352082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.846 Passthru0 00:05:39.846 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.846 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:39.846 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.846 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.846 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.846 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.846 { 00:05:39.846 "name": "Malloc0", 00:05:39.846 "aliases": [ 00:05:39.846 "2d9a372f-93d2-41ba-a687-539cd08aa407" 00:05:39.846 ], 00:05:39.846 "product_name": "Malloc disk", 00:05:39.846 "block_size": 512, 00:05:39.846 "num_blocks": 16384, 00:05:39.846 "uuid": "2d9a372f-93d2-41ba-a687-539cd08aa407", 00:05:39.846 "assigned_rate_limits": { 00:05:39.846 "rw_ios_per_sec": 0, 00:05:39.846 "rw_mbytes_per_sec": 0, 00:05:39.846 "r_mbytes_per_sec": 0, 00:05:39.846 "w_mbytes_per_sec": 0 00:05:39.846 }, 00:05:39.846 "claimed": true, 00:05:39.846 "claim_type": "exclusive_write", 00:05:39.846 "zoned": false, 00:05:39.846 "supported_io_types": { 00:05:39.846 "read": true, 00:05:39.846 "write": true, 00:05:39.846 "unmap": true, 00:05:39.846 "flush": true, 00:05:39.846 "reset": true, 00:05:39.846 "nvme_admin": false, 00:05:39.846 "nvme_io": false, 00:05:39.846 "nvme_io_md": false, 00:05:39.846 "write_zeroes": true, 00:05:39.846 "zcopy": true, 00:05:39.846 "get_zone_info": false, 00:05:39.846 "zone_management": false, 00:05:39.846 "zone_append": false, 00:05:39.846 "compare": false, 00:05:39.846 "compare_and_write": false, 00:05:39.846 "abort": true, 00:05:39.846 "seek_hole": false, 00:05:39.846 "seek_data": false, 00:05:39.846 "copy": true, 00:05:39.846 "nvme_iov_md": false 00:05:39.846 }, 00:05:39.846 "memory_domains": [ 00:05:39.846 { 00:05:39.846 "dma_device_id": "system", 00:05:39.846 "dma_device_type": 1 00:05:39.846 }, 00:05:39.846 { 00:05:39.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.846 "dma_device_type": 2 00:05:39.846 } 00:05:39.846 ], 00:05:39.846 "driver_specific": {} 00:05:39.846 }, 00:05:39.846 { 
00:05:39.846 "name": "Passthru0", 00:05:39.846 "aliases": [ 00:05:39.846 "b1e6cd84-9cd4-5b8b-8906-cf4ee055d6ba" 00:05:39.846 ], 00:05:39.846 "product_name": "passthru", 00:05:39.846 "block_size": 512, 00:05:39.846 "num_blocks": 16384, 00:05:39.846 "uuid": "b1e6cd84-9cd4-5b8b-8906-cf4ee055d6ba", 00:05:39.846 "assigned_rate_limits": { 00:05:39.846 "rw_ios_per_sec": 0, 00:05:39.846 "rw_mbytes_per_sec": 0, 00:05:39.846 "r_mbytes_per_sec": 0, 00:05:39.846 "w_mbytes_per_sec": 0 00:05:39.846 }, 00:05:39.846 "claimed": false, 00:05:39.846 "zoned": false, 00:05:39.846 "supported_io_types": { 00:05:39.846 "read": true, 00:05:39.846 "write": true, 00:05:39.846 "unmap": true, 00:05:39.846 "flush": true, 00:05:39.846 "reset": true, 00:05:39.846 "nvme_admin": false, 00:05:39.846 "nvme_io": false, 00:05:39.846 "nvme_io_md": false, 00:05:39.846 "write_zeroes": true, 00:05:39.846 "zcopy": true, 00:05:39.846 "get_zone_info": false, 00:05:39.847 "zone_management": false, 00:05:39.847 "zone_append": false, 00:05:39.847 "compare": false, 00:05:39.847 "compare_and_write": false, 00:05:39.847 "abort": true, 00:05:39.847 "seek_hole": false, 00:05:39.847 "seek_data": false, 00:05:39.847 "copy": true, 00:05:39.847 "nvme_iov_md": false 00:05:39.847 }, 00:05:39.847 "memory_domains": [ 00:05:39.847 { 00:05:39.847 "dma_device_id": "system", 00:05:39.847 "dma_device_type": 1 00:05:39.847 }, 00:05:39.847 { 00:05:39.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.847 "dma_device_type": 2 00:05:39.847 } 00:05:39.847 ], 00:05:39.847 "driver_specific": { 00:05:39.847 "passthru": { 00:05:39.847 "name": "Passthru0", 00:05:39.847 "base_bdev_name": "Malloc0" 00:05:39.847 } 00:05:39.847 } 00:05:39.847 } 00:05:39.847 ]' 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.847 12:19:45 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.847 12:19:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.847 00:05:39.847 real 0m0.271s 00:05:39.847 user 0m0.171s 00:05:39.847 sys 0m0.037s 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.847 12:19:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.847 ************************************ 00:05:39.847 END TEST rpc_integrity 00:05:39.847 ************************************ 00:05:39.847 12:19:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.847 12:19:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.847 12:19:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.847 12:19:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.847 ************************************ 00:05:39.847 START TEST rpc_plugins 
00:05:39.847 ************************************ 00:05:39.847 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:39.847 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.847 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.847 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.847 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.847 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.847 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.847 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.847 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.847 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.847 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.847 { 00:05:39.847 "name": "Malloc1", 00:05:39.847 "aliases": [ 00:05:39.847 "7885229a-4207-478a-87ab-33fbd2954f7d" 00:05:39.847 ], 00:05:39.847 "product_name": "Malloc disk", 00:05:39.847 "block_size": 4096, 00:05:39.847 "num_blocks": 256, 00:05:39.847 "uuid": "7885229a-4207-478a-87ab-33fbd2954f7d", 00:05:39.847 "assigned_rate_limits": { 00:05:39.847 "rw_ios_per_sec": 0, 00:05:39.847 "rw_mbytes_per_sec": 0, 00:05:39.847 "r_mbytes_per_sec": 0, 00:05:39.847 "w_mbytes_per_sec": 0 00:05:39.847 }, 00:05:39.847 "claimed": false, 00:05:39.847 "zoned": false, 00:05:39.847 "supported_io_types": { 00:05:39.847 "read": true, 00:05:39.847 "write": true, 00:05:39.847 "unmap": true, 00:05:39.847 "flush": true, 00:05:39.847 "reset": true, 00:05:39.847 "nvme_admin": false, 00:05:39.847 "nvme_io": false, 00:05:39.847 "nvme_io_md": false, 00:05:39.847 "write_zeroes": true, 00:05:39.847 "zcopy": true, 00:05:39.847 "get_zone_info": false, 00:05:39.847 "zone_management": false, 00:05:39.847 
"zone_append": false, 00:05:39.847 "compare": false, 00:05:39.847 "compare_and_write": false, 00:05:39.847 "abort": true, 00:05:39.847 "seek_hole": false, 00:05:39.847 "seek_data": false, 00:05:39.847 "copy": true, 00:05:39.847 "nvme_iov_md": false 00:05:39.847 }, 00:05:39.847 "memory_domains": [ 00:05:39.847 { 00:05:39.847 "dma_device_id": "system", 00:05:39.847 "dma_device_type": 1 00:05:39.847 }, 00:05:39.847 { 00:05:39.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.847 "dma_device_type": 2 00:05:39.847 } 00:05:39.847 ], 00:05:39.847 "driver_specific": {} 00:05:39.847 } 00:05:39.847 ]' 00:05:39.847 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:40.107 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:40.107 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.107 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.107 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:40.107 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:40.107 12:19:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:40.107 00:05:40.107 real 0m0.142s 00:05:40.107 user 0m0.090s 00:05:40.107 sys 0m0.016s 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.107 12:19:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.107 ************************************ 
00:05:40.107 END TEST rpc_plugins 00:05:40.107 ************************************ 00:05:40.107 12:19:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:40.107 12:19:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.107 12:19:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.107 12:19:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.107 ************************************ 00:05:40.107 START TEST rpc_trace_cmd_test 00:05:40.107 ************************************ 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:40.107 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid715470", 00:05:40.107 "tpoint_group_mask": "0x8", 00:05:40.107 "iscsi_conn": { 00:05:40.107 "mask": "0x2", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "scsi": { 00:05:40.107 "mask": "0x4", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "bdev": { 00:05:40.107 "mask": "0x8", 00:05:40.107 "tpoint_mask": "0xffffffffffffffff" 00:05:40.107 }, 00:05:40.107 "nvmf_rdma": { 00:05:40.107 "mask": "0x10", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "nvmf_tcp": { 00:05:40.107 "mask": "0x20", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "ftl": { 00:05:40.107 "mask": "0x40", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "blobfs": { 00:05:40.107 "mask": "0x80", 00:05:40.107 
"tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "dsa": { 00:05:40.107 "mask": "0x200", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "thread": { 00:05:40.107 "mask": "0x400", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "nvme_pcie": { 00:05:40.107 "mask": "0x800", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "iaa": { 00:05:40.107 "mask": "0x1000", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "nvme_tcp": { 00:05:40.107 "mask": "0x2000", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "bdev_nvme": { 00:05:40.107 "mask": "0x4000", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "sock": { 00:05:40.107 "mask": "0x8000", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "blob": { 00:05:40.107 "mask": "0x10000", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "bdev_raid": { 00:05:40.107 "mask": "0x20000", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 }, 00:05:40.107 "scheduler": { 00:05:40.107 "mask": "0x40000", 00:05:40.107 "tpoint_mask": "0x0" 00:05:40.107 } 00:05:40.107 }' 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:40.107 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:40.367 00:05:40.367 real 0m0.187s 00:05:40.367 user 0m0.157s 00:05:40.367 sys 0m0.023s 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.367 12:19:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.367 ************************************ 00:05:40.367 END TEST rpc_trace_cmd_test 00:05:40.367 ************************************ 00:05:40.367 12:19:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:40.367 12:19:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:40.367 12:19:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:40.367 12:19:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.367 12:19:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.367 12:19:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.367 ************************************ 00:05:40.367 START TEST rpc_daemon_integrity 00:05:40.367 ************************************ 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.367 { 00:05:40.367 "name": "Malloc2", 00:05:40.367 "aliases": [ 00:05:40.367 "fe0ff7b5-55fa-49ea-8c2a-17797e2fb832" 00:05:40.367 ], 00:05:40.367 "product_name": "Malloc disk", 00:05:40.367 "block_size": 512, 00:05:40.367 "num_blocks": 16384, 00:05:40.367 "uuid": "fe0ff7b5-55fa-49ea-8c2a-17797e2fb832", 00:05:40.367 "assigned_rate_limits": { 00:05:40.367 "rw_ios_per_sec": 0, 00:05:40.367 "rw_mbytes_per_sec": 0, 00:05:40.367 "r_mbytes_per_sec": 0, 00:05:40.367 "w_mbytes_per_sec": 0 00:05:40.367 }, 00:05:40.367 "claimed": false, 00:05:40.367 "zoned": false, 00:05:40.367 "supported_io_types": { 00:05:40.367 "read": true, 00:05:40.367 "write": true, 00:05:40.367 "unmap": true, 00:05:40.367 "flush": true, 00:05:40.367 "reset": true, 00:05:40.367 "nvme_admin": false, 00:05:40.367 "nvme_io": false, 00:05:40.367 "nvme_io_md": false, 00:05:40.367 "write_zeroes": true, 00:05:40.367 "zcopy": true, 00:05:40.367 "get_zone_info": false, 00:05:40.367 "zone_management": false, 00:05:40.367 "zone_append": false, 00:05:40.367 "compare": false, 00:05:40.367 "compare_and_write": false, 00:05:40.367 "abort": true, 00:05:40.367 "seek_hole": false, 00:05:40.367 "seek_data": false, 00:05:40.367 "copy": true, 00:05:40.367 "nvme_iov_md": false 00:05:40.367 }, 00:05:40.367 "memory_domains": [ 00:05:40.367 { 
00:05:40.367 "dma_device_id": "system", 00:05:40.367 "dma_device_type": 1 00:05:40.367 }, 00:05:40.367 { 00:05:40.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.367 "dma_device_type": 2 00:05:40.367 } 00:05:40.367 ], 00:05:40.367 "driver_specific": {} 00:05:40.367 } 00:05:40.367 ]' 00:05:40.367 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.627 [2024-11-20 12:19:46.173235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:40.627 [2024-11-20 12:19:46.173260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.627 [2024-11-20 12:19:46.173271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x816fb0 00:05:40.627 [2024-11-20 12:19:46.173277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.627 [2024-11-20 12:19:46.174167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.627 [2024-11-20 12:19:46.174185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.627 Passthru0 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.627 { 00:05:40.627 "name": "Malloc2", 00:05:40.627 "aliases": [ 00:05:40.627 "fe0ff7b5-55fa-49ea-8c2a-17797e2fb832" 00:05:40.627 ], 00:05:40.627 "product_name": "Malloc disk", 00:05:40.627 "block_size": 512, 00:05:40.627 "num_blocks": 16384, 00:05:40.627 "uuid": "fe0ff7b5-55fa-49ea-8c2a-17797e2fb832", 00:05:40.627 "assigned_rate_limits": { 00:05:40.627 "rw_ios_per_sec": 0, 00:05:40.627 "rw_mbytes_per_sec": 0, 00:05:40.627 "r_mbytes_per_sec": 0, 00:05:40.627 "w_mbytes_per_sec": 0 00:05:40.627 }, 00:05:40.627 "claimed": true, 00:05:40.627 "claim_type": "exclusive_write", 00:05:40.627 "zoned": false, 00:05:40.627 "supported_io_types": { 00:05:40.627 "read": true, 00:05:40.627 "write": true, 00:05:40.627 "unmap": true, 00:05:40.627 "flush": true, 00:05:40.627 "reset": true, 00:05:40.627 "nvme_admin": false, 00:05:40.627 "nvme_io": false, 00:05:40.627 "nvme_io_md": false, 00:05:40.627 "write_zeroes": true, 00:05:40.627 "zcopy": true, 00:05:40.627 "get_zone_info": false, 00:05:40.627 "zone_management": false, 00:05:40.627 "zone_append": false, 00:05:40.627 "compare": false, 00:05:40.627 "compare_and_write": false, 00:05:40.627 "abort": true, 00:05:40.627 "seek_hole": false, 00:05:40.627 "seek_data": false, 00:05:40.627 "copy": true, 00:05:40.627 "nvme_iov_md": false 00:05:40.627 }, 00:05:40.627 "memory_domains": [ 00:05:40.627 { 00:05:40.627 "dma_device_id": "system", 00:05:40.627 "dma_device_type": 1 00:05:40.627 }, 00:05:40.627 { 00:05:40.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.627 "dma_device_type": 2 00:05:40.627 } 00:05:40.627 ], 00:05:40.627 "driver_specific": {} 00:05:40.627 }, 00:05:40.627 { 00:05:40.627 "name": "Passthru0", 00:05:40.627 "aliases": [ 00:05:40.627 "1b1828a1-ca91-549b-a69b-509f2d201b87" 00:05:40.627 ], 00:05:40.627 "product_name": "passthru", 00:05:40.627 "block_size": 512, 00:05:40.627 "num_blocks": 16384, 00:05:40.627 "uuid": 
"1b1828a1-ca91-549b-a69b-509f2d201b87", 00:05:40.627 "assigned_rate_limits": { 00:05:40.627 "rw_ios_per_sec": 0, 00:05:40.627 "rw_mbytes_per_sec": 0, 00:05:40.627 "r_mbytes_per_sec": 0, 00:05:40.627 "w_mbytes_per_sec": 0 00:05:40.627 }, 00:05:40.627 "claimed": false, 00:05:40.627 "zoned": false, 00:05:40.627 "supported_io_types": { 00:05:40.627 "read": true, 00:05:40.627 "write": true, 00:05:40.627 "unmap": true, 00:05:40.627 "flush": true, 00:05:40.627 "reset": true, 00:05:40.627 "nvme_admin": false, 00:05:40.627 "nvme_io": false, 00:05:40.627 "nvme_io_md": false, 00:05:40.627 "write_zeroes": true, 00:05:40.627 "zcopy": true, 00:05:40.627 "get_zone_info": false, 00:05:40.627 "zone_management": false, 00:05:40.627 "zone_append": false, 00:05:40.627 "compare": false, 00:05:40.627 "compare_and_write": false, 00:05:40.627 "abort": true, 00:05:40.627 "seek_hole": false, 00:05:40.627 "seek_data": false, 00:05:40.627 "copy": true, 00:05:40.627 "nvme_iov_md": false 00:05:40.627 }, 00:05:40.627 "memory_domains": [ 00:05:40.627 { 00:05:40.627 "dma_device_id": "system", 00:05:40.627 "dma_device_type": 1 00:05:40.627 }, 00:05:40.627 { 00:05:40.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.627 "dma_device_type": 2 00:05:40.627 } 00:05:40.627 ], 00:05:40.627 "driver_specific": { 00:05:40.627 "passthru": { 00:05:40.627 "name": "Passthru0", 00:05:40.627 "base_bdev_name": "Malloc2" 00:05:40.627 } 00:05:40.627 } 00:05:40.627 } 00:05:40.627 ]' 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.627 00:05:40.627 real 0m0.263s 00:05:40.627 user 0m0.164s 00:05:40.627 sys 0m0.039s 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.627 12:19:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.627 ************************************ 00:05:40.627 END TEST rpc_daemon_integrity 00:05:40.627 ************************************ 00:05:40.627 12:19:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.627 12:19:46 rpc -- rpc/rpc.sh@84 -- # killprocess 715470 00:05:40.627 12:19:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 715470 ']' 00:05:40.627 12:19:46 rpc -- common/autotest_common.sh@958 -- # kill -0 715470 00:05:40.627 12:19:46 rpc -- common/autotest_common.sh@959 -- # uname 00:05:40.627 12:19:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.627 12:19:46 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 715470 00:05:40.886 12:19:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.886 12:19:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.886 12:19:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715470' 00:05:40.886 killing process with pid 715470 00:05:40.886 12:19:46 rpc -- common/autotest_common.sh@973 -- # kill 715470 00:05:40.886 12:19:46 rpc -- common/autotest_common.sh@978 -- # wait 715470 00:05:41.146 00:05:41.146 real 0m2.053s 00:05:41.146 user 0m2.575s 00:05:41.146 sys 0m0.715s 00:05:41.146 12:19:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.146 12:19:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.146 ************************************ 00:05:41.146 END TEST rpc 00:05:41.146 ************************************ 00:05:41.146 12:19:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:41.146 12:19:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.146 12:19:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.146 12:19:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.146 ************************************ 00:05:41.146 START TEST skip_rpc 00:05:41.146 ************************************ 00:05:41.146 12:19:46 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:41.146 * Looking for test storage... 
00:05:41.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.146 12:19:46 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.146 12:19:46 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.146 12:19:46 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.405 12:19:46 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.405 12:19:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:41.405 12:19:46 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.405 12:19:46 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.405 --rc genhtml_branch_coverage=1 00:05:41.405 --rc genhtml_function_coverage=1 00:05:41.405 --rc genhtml_legend=1 00:05:41.405 --rc geninfo_all_blocks=1 00:05:41.405 --rc geninfo_unexecuted_blocks=1 00:05:41.405 00:05:41.405 ' 00:05:41.405 12:19:46 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.406 --rc genhtml_branch_coverage=1 00:05:41.406 --rc genhtml_function_coverage=1 00:05:41.406 --rc genhtml_legend=1 00:05:41.406 --rc geninfo_all_blocks=1 00:05:41.406 --rc geninfo_unexecuted_blocks=1 00:05:41.406 00:05:41.406 ' 00:05:41.406 12:19:46 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:41.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.406 --rc genhtml_branch_coverage=1 00:05:41.406 --rc genhtml_function_coverage=1 00:05:41.406 --rc genhtml_legend=1 00:05:41.406 --rc geninfo_all_blocks=1 00:05:41.406 --rc geninfo_unexecuted_blocks=1 00:05:41.406 00:05:41.406 ' 00:05:41.406 12:19:46 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.406 --rc genhtml_branch_coverage=1 00:05:41.406 --rc genhtml_function_coverage=1 00:05:41.406 --rc genhtml_legend=1 00:05:41.406 --rc geninfo_all_blocks=1 00:05:41.406 --rc geninfo_unexecuted_blocks=1 00:05:41.406 00:05:41.406 ' 00:05:41.406 12:19:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.406 12:19:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:41.406 12:19:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:41.406 12:19:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.406 12:19:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.406 12:19:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.406 ************************************ 00:05:41.406 START TEST skip_rpc 00:05:41.406 ************************************ 00:05:41.406 12:19:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:41.406 12:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=716020 00:05:41.406 12:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.406 12:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:41.406 12:19:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:41.406 [2024-11-20 12:19:47.010223] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:05:41.406 [2024-11-20 12:19:47.010257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716020 ] 00:05:41.406 [2024-11-20 12:19:47.079104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.406 [2024-11-20 12:19:47.116690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.681 12:19:51 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 716020 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 716020 ']' 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 716020 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.681 12:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716020 00:05:46.681 12:19:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.681 12:19:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.681 12:19:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716020' 00:05:46.681 killing process with pid 716020 00:05:46.681 12:19:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 716020 00:05:46.681 12:19:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 716020 00:05:46.681 00:05:46.681 real 0m5.362s 00:05:46.681 user 0m5.109s 00:05:46.681 sys 0m0.293s 00:05:46.681 12:19:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.681 12:19:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.681 ************************************ 00:05:46.681 END TEST skip_rpc 00:05:46.681 ************************************ 00:05:46.681 12:19:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:46.681 12:19:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.681 12:19:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.681 12:19:52 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.681 ************************************ 00:05:46.681 START TEST skip_rpc_with_json 00:05:46.681 ************************************ 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=717093 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 717093 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 717093 ']' 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.681 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.940 [2024-11-20 12:19:52.444985] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:05:46.940 [2024-11-20 12:19:52.445022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717093 ] 00:05:46.940 [2024-11-20 12:19:52.514376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.940 [2024-11-20 12:19:52.553513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.199 [2024-11-20 12:19:52.763495] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:47.199 request: 00:05:47.199 { 00:05:47.199 "trtype": "tcp", 00:05:47.199 "method": "nvmf_get_transports", 00:05:47.199 "req_id": 1 00:05:47.199 } 00:05:47.199 Got JSON-RPC error response 00:05:47.199 response: 00:05:47.199 { 00:05:47.199 "code": -19, 00:05:47.199 "message": "No such device" 00:05:47.199 } 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.199 [2024-11-20 12:19:52.771592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.199 12:19:52 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.199 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.199 { 00:05:47.199 "subsystems": [ 00:05:47.199 { 00:05:47.199 "subsystem": "fsdev", 00:05:47.199 "config": [ 00:05:47.199 { 00:05:47.199 "method": "fsdev_set_opts", 00:05:47.199 "params": { 00:05:47.199 "fsdev_io_pool_size": 65535, 00:05:47.199 "fsdev_io_cache_size": 256 00:05:47.199 } 00:05:47.199 } 00:05:47.199 ] 00:05:47.199 }, 00:05:47.199 { 00:05:47.199 "subsystem": "vfio_user_target", 00:05:47.199 "config": null 00:05:47.199 }, 00:05:47.199 { 00:05:47.199 "subsystem": "keyring", 00:05:47.199 "config": [] 00:05:47.199 }, 00:05:47.199 { 00:05:47.199 "subsystem": "iobuf", 00:05:47.199 "config": [ 00:05:47.199 { 00:05:47.199 "method": "iobuf_set_options", 00:05:47.199 "params": { 00:05:47.199 "small_pool_count": 8192, 00:05:47.199 "large_pool_count": 1024, 00:05:47.199 "small_bufsize": 8192, 00:05:47.199 "large_bufsize": 135168, 00:05:47.199 "enable_numa": false 00:05:47.199 } 00:05:47.199 } 00:05:47.199 ] 00:05:47.199 }, 00:05:47.199 { 00:05:47.199 "subsystem": "sock", 00:05:47.199 "config": [ 00:05:47.199 { 00:05:47.199 "method": "sock_set_default_impl", 00:05:47.199 "params": { 00:05:47.199 "impl_name": "posix" 00:05:47.199 } 00:05:47.199 }, 00:05:47.199 { 00:05:47.199 "method": "sock_impl_set_options", 00:05:47.199 "params": { 00:05:47.199 "impl_name": "ssl", 00:05:47.200 "recv_buf_size": 4096, 00:05:47.200 "send_buf_size": 4096, 
00:05:47.200 "enable_recv_pipe": true, 00:05:47.200 "enable_quickack": false, 00:05:47.200 "enable_placement_id": 0, 00:05:47.200 "enable_zerocopy_send_server": true, 00:05:47.200 "enable_zerocopy_send_client": false, 00:05:47.200 "zerocopy_threshold": 0, 00:05:47.200 "tls_version": 0, 00:05:47.200 "enable_ktls": false 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "sock_impl_set_options", 00:05:47.200 "params": { 00:05:47.200 "impl_name": "posix", 00:05:47.200 "recv_buf_size": 2097152, 00:05:47.200 "send_buf_size": 2097152, 00:05:47.200 "enable_recv_pipe": true, 00:05:47.200 "enable_quickack": false, 00:05:47.200 "enable_placement_id": 0, 00:05:47.200 "enable_zerocopy_send_server": true, 00:05:47.200 "enable_zerocopy_send_client": false, 00:05:47.200 "zerocopy_threshold": 0, 00:05:47.200 "tls_version": 0, 00:05:47.200 "enable_ktls": false 00:05:47.200 } 00:05:47.200 } 00:05:47.200 ] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "vmd", 00:05:47.200 "config": [] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "accel", 00:05:47.200 "config": [ 00:05:47.200 { 00:05:47.200 "method": "accel_set_options", 00:05:47.200 "params": { 00:05:47.200 "small_cache_size": 128, 00:05:47.200 "large_cache_size": 16, 00:05:47.200 "task_count": 2048, 00:05:47.200 "sequence_count": 2048, 00:05:47.200 "buf_count": 2048 00:05:47.200 } 00:05:47.200 } 00:05:47.200 ] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "bdev", 00:05:47.200 "config": [ 00:05:47.200 { 00:05:47.200 "method": "bdev_set_options", 00:05:47.200 "params": { 00:05:47.200 "bdev_io_pool_size": 65535, 00:05:47.200 "bdev_io_cache_size": 256, 00:05:47.200 "bdev_auto_examine": true, 00:05:47.200 "iobuf_small_cache_size": 128, 00:05:47.200 "iobuf_large_cache_size": 16 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "bdev_raid_set_options", 00:05:47.200 "params": { 00:05:47.200 "process_window_size_kb": 1024, 00:05:47.200 "process_max_bandwidth_mb_sec": 0 
00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "bdev_iscsi_set_options", 00:05:47.200 "params": { 00:05:47.200 "timeout_sec": 30 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "bdev_nvme_set_options", 00:05:47.200 "params": { 00:05:47.200 "action_on_timeout": "none", 00:05:47.200 "timeout_us": 0, 00:05:47.200 "timeout_admin_us": 0, 00:05:47.200 "keep_alive_timeout_ms": 10000, 00:05:47.200 "arbitration_burst": 0, 00:05:47.200 "low_priority_weight": 0, 00:05:47.200 "medium_priority_weight": 0, 00:05:47.200 "high_priority_weight": 0, 00:05:47.200 "nvme_adminq_poll_period_us": 10000, 00:05:47.200 "nvme_ioq_poll_period_us": 0, 00:05:47.200 "io_queue_requests": 0, 00:05:47.200 "delay_cmd_submit": true, 00:05:47.200 "transport_retry_count": 4, 00:05:47.200 "bdev_retry_count": 3, 00:05:47.200 "transport_ack_timeout": 0, 00:05:47.200 "ctrlr_loss_timeout_sec": 0, 00:05:47.200 "reconnect_delay_sec": 0, 00:05:47.200 "fast_io_fail_timeout_sec": 0, 00:05:47.200 "disable_auto_failback": false, 00:05:47.200 "generate_uuids": false, 00:05:47.200 "transport_tos": 0, 00:05:47.200 "nvme_error_stat": false, 00:05:47.200 "rdma_srq_size": 0, 00:05:47.200 "io_path_stat": false, 00:05:47.200 "allow_accel_sequence": false, 00:05:47.200 "rdma_max_cq_size": 0, 00:05:47.200 "rdma_cm_event_timeout_ms": 0, 00:05:47.200 "dhchap_digests": [ 00:05:47.200 "sha256", 00:05:47.200 "sha384", 00:05:47.200 "sha512" 00:05:47.200 ], 00:05:47.200 "dhchap_dhgroups": [ 00:05:47.200 "null", 00:05:47.200 "ffdhe2048", 00:05:47.200 "ffdhe3072", 00:05:47.200 "ffdhe4096", 00:05:47.200 "ffdhe6144", 00:05:47.200 "ffdhe8192" 00:05:47.200 ] 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "bdev_nvme_set_hotplug", 00:05:47.200 "params": { 00:05:47.200 "period_us": 100000, 00:05:47.200 "enable": false 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "bdev_wait_for_examine" 00:05:47.200 } 00:05:47.200 ] 00:05:47.200 }, 00:05:47.200 { 
00:05:47.200 "subsystem": "scsi", 00:05:47.200 "config": null 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "scheduler", 00:05:47.200 "config": [ 00:05:47.200 { 00:05:47.200 "method": "framework_set_scheduler", 00:05:47.200 "params": { 00:05:47.200 "name": "static" 00:05:47.200 } 00:05:47.200 } 00:05:47.200 ] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "vhost_scsi", 00:05:47.200 "config": [] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "vhost_blk", 00:05:47.200 "config": [] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "ublk", 00:05:47.200 "config": [] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "nbd", 00:05:47.200 "config": [] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "nvmf", 00:05:47.200 "config": [ 00:05:47.200 { 00:05:47.200 "method": "nvmf_set_config", 00:05:47.200 "params": { 00:05:47.200 "discovery_filter": "match_any", 00:05:47.200 "admin_cmd_passthru": { 00:05:47.200 "identify_ctrlr": false 00:05:47.200 }, 00:05:47.200 "dhchap_digests": [ 00:05:47.200 "sha256", 00:05:47.200 "sha384", 00:05:47.200 "sha512" 00:05:47.200 ], 00:05:47.200 "dhchap_dhgroups": [ 00:05:47.200 "null", 00:05:47.200 "ffdhe2048", 00:05:47.200 "ffdhe3072", 00:05:47.200 "ffdhe4096", 00:05:47.200 "ffdhe6144", 00:05:47.200 "ffdhe8192" 00:05:47.200 ] 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "nvmf_set_max_subsystems", 00:05:47.200 "params": { 00:05:47.200 "max_subsystems": 1024 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "nvmf_set_crdt", 00:05:47.200 "params": { 00:05:47.200 "crdt1": 0, 00:05:47.200 "crdt2": 0, 00:05:47.200 "crdt3": 0 00:05:47.200 } 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "method": "nvmf_create_transport", 00:05:47.200 "params": { 00:05:47.200 "trtype": "TCP", 00:05:47.200 "max_queue_depth": 128, 00:05:47.200 "max_io_qpairs_per_ctrlr": 127, 00:05:47.200 "in_capsule_data_size": 4096, 00:05:47.200 "max_io_size": 131072, 00:05:47.200 
"io_unit_size": 131072, 00:05:47.200 "max_aq_depth": 128, 00:05:47.200 "num_shared_buffers": 511, 00:05:47.200 "buf_cache_size": 4294967295, 00:05:47.200 "dif_insert_or_strip": false, 00:05:47.200 "zcopy": false, 00:05:47.200 "c2h_success": true, 00:05:47.200 "sock_priority": 0, 00:05:47.200 "abort_timeout_sec": 1, 00:05:47.200 "ack_timeout": 0, 00:05:47.200 "data_wr_pool_size": 0 00:05:47.200 } 00:05:47.200 } 00:05:47.200 ] 00:05:47.200 }, 00:05:47.200 { 00:05:47.200 "subsystem": "iscsi", 00:05:47.200 "config": [ 00:05:47.200 { 00:05:47.200 "method": "iscsi_set_options", 00:05:47.200 "params": { 00:05:47.200 "node_base": "iqn.2016-06.io.spdk", 00:05:47.200 "max_sessions": 128, 00:05:47.200 "max_connections_per_session": 2, 00:05:47.200 "max_queue_depth": 64, 00:05:47.200 "default_time2wait": 2, 00:05:47.200 "default_time2retain": 20, 00:05:47.200 "first_burst_length": 8192, 00:05:47.200 "immediate_data": true, 00:05:47.200 "allow_duplicated_isid": false, 00:05:47.200 "error_recovery_level": 0, 00:05:47.200 "nop_timeout": 60, 00:05:47.200 "nop_in_interval": 30, 00:05:47.200 "disable_chap": false, 00:05:47.200 "require_chap": false, 00:05:47.200 "mutual_chap": false, 00:05:47.200 "chap_group": 0, 00:05:47.200 "max_large_datain_per_connection": 64, 00:05:47.200 "max_r2t_per_connection": 4, 00:05:47.200 "pdu_pool_size": 36864, 00:05:47.200 "immediate_data_pool_size": 16384, 00:05:47.200 "data_out_pool_size": 2048 00:05:47.200 } 00:05:47.200 } 00:05:47.200 ] 00:05:47.200 } 00:05:47.200 ] 00:05:47.200 } 00:05:47.200 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:47.200 12:19:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 717093 00:05:47.200 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 717093 ']' 00:05:47.200 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 717093 00:05:47.200 12:19:52 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:47.200 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.200 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717093 00:05:47.458 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.458 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.458 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717093' 00:05:47.458 killing process with pid 717093 00:05:47.458 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 717093 00:05:47.458 12:19:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 717093 00:05:47.717 12:19:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=717186 00:05:47.717 12:19:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:47.717 12:19:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 717186 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 717186 ']' 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 717186 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717186 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717186' 00:05:52.992 killing process with pid 717186 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 717186 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 717186 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:52.992 00:05:52.992 real 0m6.245s 00:05:52.992 user 0m5.939s 00:05:52.992 sys 0m0.589s 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.992 ************************************ 00:05:52.992 END TEST skip_rpc_with_json 00:05:52.992 ************************************ 00:05:52.992 12:19:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:52.992 12:19:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.992 12:19:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.992 12:19:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.992 ************************************ 00:05:52.992 START TEST skip_rpc_with_delay 00:05:52.992 ************************************ 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.992 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.251 [2024-11-20 12:19:58.764288] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:53.251 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:53.251 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.251 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.251 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.251 00:05:53.251 real 0m0.068s 00:05:53.251 user 0m0.038s 00:05:53.251 sys 0m0.029s 00:05:53.251 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.251 12:19:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:53.251 ************************************ 00:05:53.251 END TEST skip_rpc_with_delay 00:05:53.251 ************************************ 00:05:53.251 12:19:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:53.251 12:19:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:53.251 12:19:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:53.251 12:19:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.251 12:19:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.251 12:19:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.251 ************************************ 00:05:53.251 START TEST exit_on_failed_rpc_init 00:05:53.251 ************************************ 00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=718220 00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 718220 00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 718220 ']' 00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.251 12:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.252 12:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.252 12:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.252 [2024-11-20 12:19:58.905004] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:05:53.252 [2024-11-20 12:19:58.905044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718220 ] 00:05:53.252 [2024-11-20 12:19:58.976714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.510 [2024-11-20 12:19:59.016203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.082 
12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:54.082 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.082 [2024-11-20 12:19:59.741808] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:05:54.082 [2024-11-20 12:19:59.741850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718478 ] 00:05:54.082 [2024-11-20 12:19:59.813879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.342 [2024-11-20 12:19:59.851636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.342 [2024-11-20 12:19:59.851686] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:54.342 [2024-11-20 12:19:59.851711] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:54.342 [2024-11-20 12:19:59.851717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 718220 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 718220 ']' 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 718220 00:05:54.342 12:19:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718220 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718220' 00:05:54.342 killing process with pid 718220 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 718220 00:05:54.342 12:19:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 718220 00:05:54.603 00:05:54.603 real 0m1.390s 00:05:54.603 user 0m1.553s 00:05:54.603 sys 0m0.399s 00:05:54.603 12:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.603 12:20:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.603 ************************************ 00:05:54.603 END TEST exit_on_failed_rpc_init 00:05:54.603 ************************************ 00:05:54.603 12:20:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.603 00:05:54.603 real 0m13.531s 00:05:54.603 user 0m12.873s 00:05:54.603 sys 0m1.576s 00:05:54.603 12:20:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.603 12:20:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.603 ************************************ 00:05:54.603 END TEST skip_rpc 00:05:54.603 ************************************ 00:05:54.603 12:20:00 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:54.603 12:20:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.603 12:20:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.603 12:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.603 ************************************ 00:05:54.603 START TEST rpc_client 00:05:54.603 ************************************ 00:05:54.603 12:20:00 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:54.862 * Looking for test storage... 00:05:54.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:54.862 12:20:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.862 12:20:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.863 12:20:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.863 --rc genhtml_branch_coverage=1 00:05:54.863 --rc genhtml_function_coverage=1 00:05:54.863 --rc genhtml_legend=1 00:05:54.863 --rc geninfo_all_blocks=1 00:05:54.863 --rc geninfo_unexecuted_blocks=1 00:05:54.863 00:05:54.863 ' 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.863 --rc genhtml_branch_coverage=1 
00:05:54.863 --rc genhtml_function_coverage=1 00:05:54.863 --rc genhtml_legend=1 00:05:54.863 --rc geninfo_all_blocks=1 00:05:54.863 --rc geninfo_unexecuted_blocks=1 00:05:54.863 00:05:54.863 ' 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.863 --rc genhtml_branch_coverage=1 00:05:54.863 --rc genhtml_function_coverage=1 00:05:54.863 --rc genhtml_legend=1 00:05:54.863 --rc geninfo_all_blocks=1 00:05:54.863 --rc geninfo_unexecuted_blocks=1 00:05:54.863 00:05:54.863 ' 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.863 --rc genhtml_branch_coverage=1 00:05:54.863 --rc genhtml_function_coverage=1 00:05:54.863 --rc genhtml_legend=1 00:05:54.863 --rc geninfo_all_blocks=1 00:05:54.863 --rc geninfo_unexecuted_blocks=1 00:05:54.863 00:05:54.863 ' 00:05:54.863 12:20:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:54.863 OK 00:05:54.863 12:20:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:54.863 00:05:54.863 real 0m0.197s 00:05:54.863 user 0m0.115s 00:05:54.863 sys 0m0.095s 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.863 12:20:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:54.863 ************************************ 00:05:54.863 END TEST rpc_client 00:05:54.863 ************************************ 00:05:54.863 12:20:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:54.863 12:20:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.863 12:20:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.863 12:20:00 -- common/autotest_common.sh@10 
-- # set +x 00:05:54.863 ************************************ 00:05:54.863 START TEST json_config 00:05:54.863 ************************************ 00:05:54.863 12:20:00 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:55.123 12:20:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.123 12:20:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.123 12:20:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.123 12:20:00 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.123 12:20:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.123 12:20:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.123 12:20:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.123 12:20:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.123 12:20:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.123 12:20:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.123 12:20:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.123 12:20:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.123 12:20:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.123 12:20:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.123 12:20:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.123 12:20:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:55.123 12:20:00 json_config -- scripts/common.sh@345 -- # : 1 00:05:55.123 12:20:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.123 12:20:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.123 12:20:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:55.123 12:20:00 json_config -- scripts/common.sh@353 -- # local d=1 00:05:55.123 12:20:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.123 12:20:00 json_config -- scripts/common.sh@355 -- # echo 1 00:05:55.124 12:20:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.124 12:20:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:55.124 12:20:00 json_config -- scripts/common.sh@353 -- # local d=2 00:05:55.124 12:20:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.124 12:20:00 json_config -- scripts/common.sh@355 -- # echo 2 00:05:55.124 12:20:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.124 12:20:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.124 12:20:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.124 12:20:00 json_config -- scripts/common.sh@368 -- # return 0 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.124 --rc genhtml_branch_coverage=1 00:05:55.124 --rc genhtml_function_coverage=1 00:05:55.124 --rc genhtml_legend=1 00:05:55.124 --rc geninfo_all_blocks=1 00:05:55.124 --rc geninfo_unexecuted_blocks=1 00:05:55.124 00:05:55.124 ' 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.124 --rc genhtml_branch_coverage=1 00:05:55.124 --rc genhtml_function_coverage=1 00:05:55.124 --rc genhtml_legend=1 00:05:55.124 --rc geninfo_all_blocks=1 00:05:55.124 --rc geninfo_unexecuted_blocks=1 00:05:55.124 00:05:55.124 ' 00:05:55.124 12:20:00 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.124 --rc genhtml_branch_coverage=1 00:05:55.124 --rc genhtml_function_coverage=1 00:05:55.124 --rc genhtml_legend=1 00:05:55.124 --rc geninfo_all_blocks=1 00:05:55.124 --rc geninfo_unexecuted_blocks=1 00:05:55.124 00:05:55.124 ' 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.124 --rc genhtml_branch_coverage=1 00:05:55.124 --rc genhtml_function_coverage=1 00:05:55.124 --rc genhtml_legend=1 00:05:55.124 --rc geninfo_all_blocks=1 00:05:55.124 --rc geninfo_unexecuted_blocks=1 00:05:55.124 00:05:55.124 ' 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.124 12:20:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.124 12:20:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.124 12:20:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.124 12:20:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.124 12:20:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.124 12:20:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.124 12:20:00 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.124 12:20:00 json_config -- paths/export.sh@5 -- # export PATH 00:05:55.124 12:20:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@51 -- # : 0 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.124 12:20:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:55.124 INFO: JSON configuration test init 00:05:55.124 12:20:00 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.124 12:20:00 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:55.124 12:20:00 json_config -- json_config/common.sh@9 -- # local app=target 00:05:55.124 12:20:00 json_config -- json_config/common.sh@10 -- # shift 00:05:55.124 12:20:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.124 12:20:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.124 12:20:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.124 12:20:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.124 12:20:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.124 12:20:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=718864 00:05:55.124 12:20:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.124 Waiting for target to run... 
00:05:55.124 12:20:00 json_config -- json_config/common.sh@25 -- # waitforlisten 718864 /var/tmp/spdk_tgt.sock 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@835 -- # '[' -z 718864 ']' 00:05:55.124 12:20:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.124 12:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.124 [2024-11-20 12:20:00.871247] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:05:55.125 [2024-11-20 12:20:00.871295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718864 ] 00:05:55.692 [2024-11-20 12:20:01.300084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.692 [2024-11-20 12:20:01.358561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.951 12:20:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.951 12:20:01 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:55.951 12:20:01 json_config -- json_config/common.sh@26 -- # echo '' 00:05:55.951 00:05:55.951 12:20:01 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:55.951 12:20:01 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:55.951 12:20:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.951 12:20:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.951 12:20:01 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:55.951 12:20:01 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:55.951 12:20:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.951 12:20:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.211 12:20:01 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:56.211 12:20:01 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:56.211 12:20:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:08.424 12:20:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.424 12:20:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:08.424 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@54 -- # sort 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:08.424 12:20:13 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:08.424 12:20:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.424 12:20:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:08.424 12:20:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.424 12:20:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.424 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.424 MallocForNvmf0 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:08.424 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.424 MallocForNvmf1 00:06:08.424 12:20:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.424 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.424 [2024-11-20 12:20:13.996677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.424 12:20:14 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.424 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.424 12:20:14 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.424 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.684 12:20:14 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.684 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.942 12:20:14 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.942 12:20:14 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.942 [2024-11-20 12:20:14.638821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:08.942 12:20:14 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:08.942 12:20:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.942 12:20:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.942 12:20:14 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:08.942 12:20:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.942 12:20:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.200 12:20:14 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:09.200 12:20:14 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.200 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.200 MallocBdevForConfigChangeCheck 00:06:09.200 12:20:14 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:09.200 12:20:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.200 12:20:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.200 12:20:14 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:09.200 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.770 12:20:15 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:09.770 INFO: shutting down applications... 00:06:09.770 12:20:15 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:09.770 12:20:15 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:09.770 12:20:15 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:09.770 12:20:15 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:17.955 Calling clear_iscsi_subsystem 00:06:17.955 Calling clear_nvmf_subsystem 00:06:17.955 Calling clear_nbd_subsystem 00:06:17.955 Calling clear_ublk_subsystem 00:06:17.955 Calling clear_vhost_blk_subsystem 00:06:17.955 Calling clear_vhost_scsi_subsystem 00:06:17.955 Calling clear_bdev_subsystem 00:06:17.955 12:20:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:17.955 12:20:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:17.955 12:20:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:17.955 12:20:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:17.955 12:20:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.955 12:20:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:18.214 12:20:23 json_config -- json_config/json_config.sh@352 -- # break 00:06:18.214 12:20:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:18.214 12:20:23 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:18.214 12:20:23 json_config -- json_config/common.sh@31 -- # local app=target 00:06:18.214 12:20:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.214 12:20:23 json_config -- json_config/common.sh@35 -- # [[ -n 718864 ]] 00:06:18.214 12:20:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 718864 00:06:18.214 12:20:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.214 12:20:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.214 12:20:23 json_config -- json_config/common.sh@41 -- # kill -0 718864 00:06:18.214 12:20:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:18.781 12:20:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:18.781 12:20:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.781 12:20:24 json_config -- json_config/common.sh@41 -- # kill -0 718864 00:06:18.781 12:20:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:18.781 12:20:24 json_config -- json_config/common.sh@43 -- # break 00:06:18.781 12:20:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:18.781 12:20:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:18.781 SPDK target shutdown done 00:06:18.781 12:20:24 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:18.781 INFO: relaunching applications... 
00:06:18.781 12:20:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.781 12:20:24 json_config -- json_config/common.sh@9 -- # local app=target 00:06:18.781 12:20:24 json_config -- json_config/common.sh@10 -- # shift 00:06:18.781 12:20:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.781 12:20:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.781 12:20:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.781 12:20:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.781 12:20:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.781 12:20:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=723221 00:06:18.781 12:20:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.781 Waiting for target to run... 00:06:18.781 12:20:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.781 12:20:24 json_config -- json_config/common.sh@25 -- # waitforlisten 723221 /var/tmp/spdk_tgt.sock 00:06:18.781 12:20:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 723221 ']' 00:06:18.781 12:20:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.781 12:20:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.781 12:20:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:18.781 12:20:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.781 12:20:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.781 [2024-11-20 12:20:24.537169] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:06:18.781 [2024-11-20 12:20:24.537222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723221 ] 00:06:19.350 [2024-11-20 12:20:24.971156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.350 [2024-11-20 12:20:25.025157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.573 [2024-11-20 12:20:36.506522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.573 [2024-11-20 12:20:36.538950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:31.573 12:20:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.573 12:20:37 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:31.573 12:20:37 json_config -- json_config/common.sh@26 -- # echo '' 00:06:31.573 00:06:31.573 12:20:37 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:31.573 12:20:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:31.573 INFO: Checking if target configuration is the same... 
00:06:31.573 12:20:37 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.573 12:20:37 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:31.573 12:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.573 + '[' 2 -ne 2 ']' 00:06:31.573 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:31.573 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:31.573 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:31.573 +++ basename /dev/fd/62 00:06:31.573 ++ mktemp /tmp/62.XXX 00:06:31.573 + tmp_file_1=/tmp/62.KUL 00:06:31.573 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.573 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:31.573 + tmp_file_2=/tmp/spdk_tgt_config.json.VT3 00:06:31.573 + ret=0 00:06:31.573 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.831 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.831 + diff -u /tmp/62.KUL /tmp/spdk_tgt_config.json.VT3 00:06:31.831 + echo 'INFO: JSON config files are the same' 00:06:31.831 INFO: JSON config files are the same 00:06:31.831 + rm /tmp/62.KUL /tmp/spdk_tgt_config.json.VT3 00:06:31.831 + exit 0 00:06:31.831 12:20:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:31.831 12:20:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:31.831 INFO: changing configuration and checking if this can be detected... 
00:06:31.831 12:20:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:31.832 12:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:32.091 12:20:37 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.091 12:20:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:32.091 12:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.091 + '[' 2 -ne 2 ']' 00:06:32.091 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:32.091 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:32.091 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.091 +++ basename /dev/fd/62 00:06:32.091 ++ mktemp /tmp/62.XXX 00:06:32.091 + tmp_file_1=/tmp/62.MF4 00:06:32.091 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.091 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:32.091 + tmp_file_2=/tmp/spdk_tgt_config.json.DuM 00:06:32.091 + ret=0 00:06:32.091 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:32.350 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:32.350 + diff -u /tmp/62.MF4 /tmp/spdk_tgt_config.json.DuM 00:06:32.350 + ret=1 00:06:32.350 + echo '=== Start of file: /tmp/62.MF4 ===' 00:06:32.350 + cat /tmp/62.MF4 00:06:32.350 + echo '=== End of file: /tmp/62.MF4 ===' 00:06:32.350 + echo '' 00:06:32.350 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DuM ===' 00:06:32.350 + cat /tmp/spdk_tgt_config.json.DuM 00:06:32.609 + echo '=== End of file: /tmp/spdk_tgt_config.json.DuM ===' 00:06:32.609 + echo '' 00:06:32.609 + rm /tmp/62.MF4 /tmp/spdk_tgt_config.json.DuM 00:06:32.609 + exit 1 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:32.609 INFO: configuration change detected. 
00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@324 -- # [[ -n 723221 ]] 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.609 12:20:38 json_config -- json_config/json_config.sh@330 -- # killprocess 723221 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@954 -- # '[' -z 723221 ']' 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@958 -- # kill -0 723221 
00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@959 -- # uname 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 723221 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 723221' 00:06:32.609 killing process with pid 723221 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@973 -- # kill 723221 00:06:32.609 12:20:38 json_config -- common/autotest_common.sh@978 -- # wait 723221 00:06:35.899 12:20:41 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.899 12:20:41 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:35.899 12:20:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.899 12:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.899 12:20:41 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:35.899 12:20:41 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:35.899 INFO: Success 00:06:35.899 00:06:35.899 real 0m40.824s 00:06:35.899 user 0m35.377s 00:06:35.899 sys 0m4.753s 00:06:35.899 12:20:41 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.899 12:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.899 ************************************ 00:06:35.899 END TEST json_config 00:06:35.899 ************************************ 00:06:35.899 12:20:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.899 12:20:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.899 12:20:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.899 12:20:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.899 ************************************ 00:06:35.899 START TEST json_config_extra_key 00:06:35.899 ************************************ 00:06:35.899 12:20:41 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.899 12:20:41 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.899 12:20:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.899 12:20:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.899 12:20:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.899 12:20:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:36.160 12:20:41 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.160 12:20:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.160 --rc genhtml_branch_coverage=1 00:06:36.160 --rc genhtml_function_coverage=1 00:06:36.160 --rc genhtml_legend=1 00:06:36.160 --rc geninfo_all_blocks=1 
00:06:36.160 --rc geninfo_unexecuted_blocks=1 00:06:36.160 00:06:36.160 ' 00:06:36.160 12:20:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.160 --rc genhtml_branch_coverage=1 00:06:36.160 --rc genhtml_function_coverage=1 00:06:36.160 --rc genhtml_legend=1 00:06:36.160 --rc geninfo_all_blocks=1 00:06:36.160 --rc geninfo_unexecuted_blocks=1 00:06:36.160 00:06:36.160 ' 00:06:36.160 12:20:41 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.160 --rc genhtml_branch_coverage=1 00:06:36.160 --rc genhtml_function_coverage=1 00:06:36.160 --rc genhtml_legend=1 00:06:36.160 --rc geninfo_all_blocks=1 00:06:36.160 --rc geninfo_unexecuted_blocks=1 00:06:36.160 00:06:36.160 ' 00:06:36.160 12:20:41 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.160 --rc genhtml_branch_coverage=1 00:06:36.160 --rc genhtml_function_coverage=1 00:06:36.160 --rc genhtml_legend=1 00:06:36.160 --rc geninfo_all_blocks=1 00:06:36.160 --rc geninfo_unexecuted_blocks=1 00:06:36.160 00:06:36.160 ' 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.160 12:20:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.160 12:20:41 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.160 12:20:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.160 12:20:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.160 12:20:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:36.160 12:20:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:36.160 12:20:41 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.160 12:20:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:36.160 INFO: launching applications... 00:06:36.160 12:20:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=726508 00:06:36.160 12:20:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:36.160 Waiting for target to run... 
00:06:36.161 12:20:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 726508 /var/tmp/spdk_tgt.sock 00:06:36.161 12:20:41 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 726508 ']' 00:06:36.161 12:20:41 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.161 12:20:41 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:36.161 12:20:41 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.161 12:20:41 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:36.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:36.161 12:20:41 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.161 12:20:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.161 [2024-11-20 12:20:41.760771] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:06:36.161 [2024-11-20 12:20:41.760814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726508 ] 00:06:36.421 [2024-11-20 12:20:42.045991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.421 [2024-11-20 12:20:42.077378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.989 12:20:42 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.989 12:20:42 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:36.989 00:06:36.989 12:20:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:36.989 INFO: shutting down applications... 00:06:36.989 12:20:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 726508 ]] 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 726508 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 726508 00:06:36.989 12:20:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.558 12:20:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.558 12:20:43 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.558 12:20:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 726508 00:06:37.558 12:20:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:37.558 12:20:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:37.558 12:20:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.558 12:20:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.558 SPDK target shutdown done 00:06:37.558 12:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:37.558 Success 00:06:37.558 00:06:37.558 real 0m1.543s 00:06:37.558 user 0m1.277s 00:06:37.558 sys 0m0.404s 00:06:37.558 12:20:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.558 12:20:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 ************************************ 00:06:37.558 END TEST json_config_extra_key 00:06:37.558 ************************************ 00:06:37.558 12:20:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.558 12:20:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.558 12:20:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.559 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:06:37.559 ************************************ 00:06:37.559 START TEST alias_rpc 00:06:37.559 ************************************ 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.559 * Looking for test storage... 
00:06:37.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.559 12:20:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.559 --rc genhtml_branch_coverage=1 00:06:37.559 --rc genhtml_function_coverage=1 00:06:37.559 --rc genhtml_legend=1 00:06:37.559 --rc geninfo_all_blocks=1 00:06:37.559 --rc geninfo_unexecuted_blocks=1 00:06:37.559 00:06:37.559 ' 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.559 --rc genhtml_branch_coverage=1 00:06:37.559 --rc genhtml_function_coverage=1 00:06:37.559 --rc genhtml_legend=1 00:06:37.559 --rc geninfo_all_blocks=1 00:06:37.559 --rc geninfo_unexecuted_blocks=1 00:06:37.559 00:06:37.559 ' 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:37.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.559 --rc genhtml_branch_coverage=1 00:06:37.559 --rc genhtml_function_coverage=1 00:06:37.559 --rc genhtml_legend=1 00:06:37.559 --rc geninfo_all_blocks=1 00:06:37.559 --rc geninfo_unexecuted_blocks=1 00:06:37.559 00:06:37.559 ' 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.559 --rc genhtml_branch_coverage=1 00:06:37.559 --rc genhtml_function_coverage=1 00:06:37.559 --rc genhtml_legend=1 00:06:37.559 --rc geninfo_all_blocks=1 00:06:37.559 --rc geninfo_unexecuted_blocks=1 00:06:37.559 00:06:37.559 ' 00:06:37.559 12:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.559 12:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=726832 00:06:37.559 12:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.559 12:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 726832 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 726832 ']' 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.559 12:20:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.818 [2024-11-20 12:20:43.363201] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:06:37.818 [2024-11-20 12:20:43.363244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726832 ] 00:06:37.818 [2024-11-20 12:20:43.432577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.818 [2024-11-20 12:20:43.472796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.077 12:20:43 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.077 12:20:43 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.077 12:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:38.337 12:20:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 726832 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 726832 ']' 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 726832 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 726832 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 726832' 00:06:38.337 killing process with pid 726832 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@973 -- # kill 726832 00:06:38.337 12:20:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 726832 00:06:38.596 00:06:38.596 real 0m1.094s 00:06:38.596 user 0m1.079s 00:06:38.596 sys 0m0.418s 00:06:38.596 12:20:44 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.596 12:20:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.596 ************************************ 00:06:38.596 END TEST alias_rpc 00:06:38.596 ************************************ 00:06:38.596 12:20:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:38.596 12:20:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.596 12:20:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.596 12:20:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.596 12:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.596 ************************************ 00:06:38.596 START TEST spdkcli_tcp 00:06:38.596 ************************************ 00:06:38.596 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.857 * Looking for test storage... 
00:06:38.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.857 12:20:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.857 --rc genhtml_branch_coverage=1 00:06:38.857 --rc genhtml_function_coverage=1 00:06:38.857 --rc genhtml_legend=1 00:06:38.857 --rc geninfo_all_blocks=1 00:06:38.857 --rc geninfo_unexecuted_blocks=1 00:06:38.857 00:06:38.857 ' 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.857 --rc genhtml_branch_coverage=1 00:06:38.857 --rc genhtml_function_coverage=1 00:06:38.857 --rc genhtml_legend=1 00:06:38.857 --rc geninfo_all_blocks=1 00:06:38.857 --rc geninfo_unexecuted_blocks=1 00:06:38.857 00:06:38.857 ' 00:06:38.857 12:20:44 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.857 --rc genhtml_branch_coverage=1 00:06:38.857 --rc genhtml_function_coverage=1 00:06:38.857 --rc genhtml_legend=1 00:06:38.857 --rc geninfo_all_blocks=1 00:06:38.857 --rc geninfo_unexecuted_blocks=1 00:06:38.857 00:06:38.857 ' 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.857 --rc genhtml_branch_coverage=1 00:06:38.857 --rc genhtml_function_coverage=1 00:06:38.857 --rc genhtml_legend=1 00:06:38.857 --rc geninfo_all_blocks=1 00:06:38.857 --rc geninfo_unexecuted_blocks=1 00:06:38.857 00:06:38.857 ' 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=727153 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 727153 00:06:38.857 12:20:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 727153 ']' 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.857 12:20:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.857 [2024-11-20 12:20:44.536219] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:06:38.857 [2024-11-20 12:20:44.536261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727153 ] 00:06:38.857 [2024-11-20 12:20:44.607616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.117 [2024-11-20 12:20:44.646257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.117 [2024-11-20 12:20:44.646258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.686 12:20:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.686 12:20:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:39.686 12:20:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=727416 00:06:39.686 12:20:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:39.686 12:20:45 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:39.946 [ 00:06:39.946 "bdev_malloc_delete", 00:06:39.946 "bdev_malloc_create", 00:06:39.946 "bdev_null_resize", 00:06:39.946 "bdev_null_delete", 00:06:39.946 "bdev_null_create", 00:06:39.946 "bdev_nvme_cuse_unregister", 00:06:39.946 "bdev_nvme_cuse_register", 00:06:39.946 "bdev_opal_new_user", 00:06:39.946 "bdev_opal_set_lock_state", 00:06:39.946 "bdev_opal_delete", 00:06:39.946 "bdev_opal_get_info", 00:06:39.946 "bdev_opal_create", 00:06:39.946 "bdev_nvme_opal_revert", 00:06:39.946 "bdev_nvme_opal_init", 00:06:39.946 "bdev_nvme_send_cmd", 00:06:39.946 "bdev_nvme_set_keys", 00:06:39.946 "bdev_nvme_get_path_iostat", 00:06:39.946 "bdev_nvme_get_mdns_discovery_info", 00:06:39.946 "bdev_nvme_stop_mdns_discovery", 00:06:39.946 "bdev_nvme_start_mdns_discovery", 00:06:39.946 "bdev_nvme_set_multipath_policy", 00:06:39.946 "bdev_nvme_set_preferred_path", 00:06:39.946 "bdev_nvme_get_io_paths", 00:06:39.946 "bdev_nvme_remove_error_injection", 00:06:39.946 "bdev_nvme_add_error_injection", 00:06:39.946 "bdev_nvme_get_discovery_info", 00:06:39.946 "bdev_nvme_stop_discovery", 00:06:39.946 "bdev_nvme_start_discovery", 00:06:39.946 "bdev_nvme_get_controller_health_info", 00:06:39.946 "bdev_nvme_disable_controller", 00:06:39.946 "bdev_nvme_enable_controller", 00:06:39.946 "bdev_nvme_reset_controller", 00:06:39.946 "bdev_nvme_get_transport_statistics", 00:06:39.946 "bdev_nvme_apply_firmware", 00:06:39.946 "bdev_nvme_detach_controller", 00:06:39.946 "bdev_nvme_get_controllers", 00:06:39.946 "bdev_nvme_attach_controller", 00:06:39.946 "bdev_nvme_set_hotplug", 00:06:39.946 "bdev_nvme_set_options", 00:06:39.946 "bdev_passthru_delete", 00:06:39.946 "bdev_passthru_create", 00:06:39.946 "bdev_lvol_set_parent_bdev", 00:06:39.946 "bdev_lvol_set_parent", 00:06:39.946 "bdev_lvol_check_shallow_copy", 00:06:39.946 "bdev_lvol_start_shallow_copy", 00:06:39.946 "bdev_lvol_grow_lvstore", 00:06:39.946 
"bdev_lvol_get_lvols", 00:06:39.946 "bdev_lvol_get_lvstores", 00:06:39.946 "bdev_lvol_delete", 00:06:39.946 "bdev_lvol_set_read_only", 00:06:39.946 "bdev_lvol_resize", 00:06:39.946 "bdev_lvol_decouple_parent", 00:06:39.946 "bdev_lvol_inflate", 00:06:39.946 "bdev_lvol_rename", 00:06:39.946 "bdev_lvol_clone_bdev", 00:06:39.946 "bdev_lvol_clone", 00:06:39.946 "bdev_lvol_snapshot", 00:06:39.946 "bdev_lvol_create", 00:06:39.946 "bdev_lvol_delete_lvstore", 00:06:39.946 "bdev_lvol_rename_lvstore", 00:06:39.946 "bdev_lvol_create_lvstore", 00:06:39.946 "bdev_raid_set_options", 00:06:39.946 "bdev_raid_remove_base_bdev", 00:06:39.946 "bdev_raid_add_base_bdev", 00:06:39.946 "bdev_raid_delete", 00:06:39.946 "bdev_raid_create", 00:06:39.946 "bdev_raid_get_bdevs", 00:06:39.946 "bdev_error_inject_error", 00:06:39.946 "bdev_error_delete", 00:06:39.946 "bdev_error_create", 00:06:39.946 "bdev_split_delete", 00:06:39.946 "bdev_split_create", 00:06:39.946 "bdev_delay_delete", 00:06:39.946 "bdev_delay_create", 00:06:39.946 "bdev_delay_update_latency", 00:06:39.946 "bdev_zone_block_delete", 00:06:39.946 "bdev_zone_block_create", 00:06:39.946 "blobfs_create", 00:06:39.946 "blobfs_detect", 00:06:39.946 "blobfs_set_cache_size", 00:06:39.946 "bdev_aio_delete", 00:06:39.946 "bdev_aio_rescan", 00:06:39.946 "bdev_aio_create", 00:06:39.946 "bdev_ftl_set_property", 00:06:39.946 "bdev_ftl_get_properties", 00:06:39.946 "bdev_ftl_get_stats", 00:06:39.946 "bdev_ftl_unmap", 00:06:39.946 "bdev_ftl_unload", 00:06:39.946 "bdev_ftl_delete", 00:06:39.946 "bdev_ftl_load", 00:06:39.946 "bdev_ftl_create", 00:06:39.946 "bdev_virtio_attach_controller", 00:06:39.946 "bdev_virtio_scsi_get_devices", 00:06:39.946 "bdev_virtio_detach_controller", 00:06:39.946 "bdev_virtio_blk_set_hotplug", 00:06:39.946 "bdev_iscsi_delete", 00:06:39.946 "bdev_iscsi_create", 00:06:39.946 "bdev_iscsi_set_options", 00:06:39.946 "accel_error_inject_error", 00:06:39.946 "ioat_scan_accel_module", 00:06:39.946 "dsa_scan_accel_module", 
00:06:39.946 "iaa_scan_accel_module", 00:06:39.946 "vfu_virtio_create_fs_endpoint", 00:06:39.946 "vfu_virtio_create_scsi_endpoint", 00:06:39.946 "vfu_virtio_scsi_remove_target", 00:06:39.946 "vfu_virtio_scsi_add_target", 00:06:39.946 "vfu_virtio_create_blk_endpoint", 00:06:39.946 "vfu_virtio_delete_endpoint", 00:06:39.946 "keyring_file_remove_key", 00:06:39.946 "keyring_file_add_key", 00:06:39.946 "keyring_linux_set_options", 00:06:39.946 "fsdev_aio_delete", 00:06:39.946 "fsdev_aio_create", 00:06:39.946 "iscsi_get_histogram", 00:06:39.946 "iscsi_enable_histogram", 00:06:39.946 "iscsi_set_options", 00:06:39.946 "iscsi_get_auth_groups", 00:06:39.946 "iscsi_auth_group_remove_secret", 00:06:39.946 "iscsi_auth_group_add_secret", 00:06:39.946 "iscsi_delete_auth_group", 00:06:39.946 "iscsi_create_auth_group", 00:06:39.946 "iscsi_set_discovery_auth", 00:06:39.946 "iscsi_get_options", 00:06:39.946 "iscsi_target_node_request_logout", 00:06:39.946 "iscsi_target_node_set_redirect", 00:06:39.946 "iscsi_target_node_set_auth", 00:06:39.946 "iscsi_target_node_add_lun", 00:06:39.946 "iscsi_get_stats", 00:06:39.946 "iscsi_get_connections", 00:06:39.946 "iscsi_portal_group_set_auth", 00:06:39.946 "iscsi_start_portal_group", 00:06:39.946 "iscsi_delete_portal_group", 00:06:39.946 "iscsi_create_portal_group", 00:06:39.946 "iscsi_get_portal_groups", 00:06:39.946 "iscsi_delete_target_node", 00:06:39.946 "iscsi_target_node_remove_pg_ig_maps", 00:06:39.946 "iscsi_target_node_add_pg_ig_maps", 00:06:39.946 "iscsi_create_target_node", 00:06:39.946 "iscsi_get_target_nodes", 00:06:39.946 "iscsi_delete_initiator_group", 00:06:39.946 "iscsi_initiator_group_remove_initiators", 00:06:39.946 "iscsi_initiator_group_add_initiators", 00:06:39.946 "iscsi_create_initiator_group", 00:06:39.946 "iscsi_get_initiator_groups", 00:06:39.946 "nvmf_set_crdt", 00:06:39.946 "nvmf_set_config", 00:06:39.946 "nvmf_set_max_subsystems", 00:06:39.946 "nvmf_stop_mdns_prr", 00:06:39.946 "nvmf_publish_mdns_prr", 
00:06:39.946 "nvmf_subsystem_get_listeners", 00:06:39.946 "nvmf_subsystem_get_qpairs", 00:06:39.946 "nvmf_subsystem_get_controllers", 00:06:39.946 "nvmf_get_stats", 00:06:39.946 "nvmf_get_transports", 00:06:39.946 "nvmf_create_transport", 00:06:39.946 "nvmf_get_targets", 00:06:39.946 "nvmf_delete_target", 00:06:39.946 "nvmf_create_target", 00:06:39.946 "nvmf_subsystem_allow_any_host", 00:06:39.946 "nvmf_subsystem_set_keys", 00:06:39.946 "nvmf_subsystem_remove_host", 00:06:39.946 "nvmf_subsystem_add_host", 00:06:39.946 "nvmf_ns_remove_host", 00:06:39.946 "nvmf_ns_add_host", 00:06:39.946 "nvmf_subsystem_remove_ns", 00:06:39.946 "nvmf_subsystem_set_ns_ana_group", 00:06:39.946 "nvmf_subsystem_add_ns", 00:06:39.946 "nvmf_subsystem_listener_set_ana_state", 00:06:39.946 "nvmf_discovery_get_referrals", 00:06:39.946 "nvmf_discovery_remove_referral", 00:06:39.946 "nvmf_discovery_add_referral", 00:06:39.946 "nvmf_subsystem_remove_listener", 00:06:39.946 "nvmf_subsystem_add_listener", 00:06:39.946 "nvmf_delete_subsystem", 00:06:39.946 "nvmf_create_subsystem", 00:06:39.946 "nvmf_get_subsystems", 00:06:39.946 "env_dpdk_get_mem_stats", 00:06:39.946 "nbd_get_disks", 00:06:39.946 "nbd_stop_disk", 00:06:39.946 "nbd_start_disk", 00:06:39.946 "ublk_recover_disk", 00:06:39.946 "ublk_get_disks", 00:06:39.946 "ublk_stop_disk", 00:06:39.946 "ublk_start_disk", 00:06:39.946 "ublk_destroy_target", 00:06:39.946 "ublk_create_target", 00:06:39.946 "virtio_blk_create_transport", 00:06:39.946 "virtio_blk_get_transports", 00:06:39.946 "vhost_controller_set_coalescing", 00:06:39.946 "vhost_get_controllers", 00:06:39.947 "vhost_delete_controller", 00:06:39.947 "vhost_create_blk_controller", 00:06:39.947 "vhost_scsi_controller_remove_target", 00:06:39.947 "vhost_scsi_controller_add_target", 00:06:39.947 "vhost_start_scsi_controller", 00:06:39.947 "vhost_create_scsi_controller", 00:06:39.947 "thread_set_cpumask", 00:06:39.947 "scheduler_set_options", 00:06:39.947 "framework_get_governor", 00:06:39.947 
"framework_get_scheduler", 00:06:39.947 "framework_set_scheduler", 00:06:39.947 "framework_get_reactors", 00:06:39.947 "thread_get_io_channels", 00:06:39.947 "thread_get_pollers", 00:06:39.947 "thread_get_stats", 00:06:39.947 "framework_monitor_context_switch", 00:06:39.947 "spdk_kill_instance", 00:06:39.947 "log_enable_timestamps", 00:06:39.947 "log_get_flags", 00:06:39.947 "log_clear_flag", 00:06:39.947 "log_set_flag", 00:06:39.947 "log_get_level", 00:06:39.947 "log_set_level", 00:06:39.947 "log_get_print_level", 00:06:39.947 "log_set_print_level", 00:06:39.947 "framework_enable_cpumask_locks", 00:06:39.947 "framework_disable_cpumask_locks", 00:06:39.947 "framework_wait_init", 00:06:39.947 "framework_start_init", 00:06:39.947 "scsi_get_devices", 00:06:39.947 "bdev_get_histogram", 00:06:39.947 "bdev_enable_histogram", 00:06:39.947 "bdev_set_qos_limit", 00:06:39.947 "bdev_set_qd_sampling_period", 00:06:39.947 "bdev_get_bdevs", 00:06:39.947 "bdev_reset_iostat", 00:06:39.947 "bdev_get_iostat", 00:06:39.947 "bdev_examine", 00:06:39.947 "bdev_wait_for_examine", 00:06:39.947 "bdev_set_options", 00:06:39.947 "accel_get_stats", 00:06:39.947 "accel_set_options", 00:06:39.947 "accel_set_driver", 00:06:39.947 "accel_crypto_key_destroy", 00:06:39.947 "accel_crypto_keys_get", 00:06:39.947 "accel_crypto_key_create", 00:06:39.947 "accel_assign_opc", 00:06:39.947 "accel_get_module_info", 00:06:39.947 "accel_get_opc_assignments", 00:06:39.947 "vmd_rescan", 00:06:39.947 "vmd_remove_device", 00:06:39.947 "vmd_enable", 00:06:39.947 "sock_get_default_impl", 00:06:39.947 "sock_set_default_impl", 00:06:39.947 "sock_impl_set_options", 00:06:39.947 "sock_impl_get_options", 00:06:39.947 "iobuf_get_stats", 00:06:39.947 "iobuf_set_options", 00:06:39.947 "keyring_get_keys", 00:06:39.947 "vfu_tgt_set_base_path", 00:06:39.947 "framework_get_pci_devices", 00:06:39.947 "framework_get_config", 00:06:39.947 "framework_get_subsystems", 00:06:39.947 "fsdev_set_opts", 00:06:39.947 "fsdev_get_opts", 
00:06:39.947 "trace_get_info", 00:06:39.947 "trace_get_tpoint_group_mask", 00:06:39.947 "trace_disable_tpoint_group", 00:06:39.947 "trace_enable_tpoint_group", 00:06:39.947 "trace_clear_tpoint_mask", 00:06:39.947 "trace_set_tpoint_mask", 00:06:39.947 "notify_get_notifications", 00:06:39.947 "notify_get_types", 00:06:39.947 "spdk_get_version", 00:06:39.947 "rpc_get_methods" 00:06:39.947 ] 00:06:39.947 12:20:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.947 12:20:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:39.947 12:20:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 727153 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 727153 ']' 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 727153 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 727153 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 727153' 00:06:39.947 killing process with pid 727153 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 727153 00:06:39.947 12:20:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 727153 00:06:40.206 00:06:40.206 real 0m1.616s 00:06:40.206 user 0m3.020s 00:06:40.206 sys 0m0.456s 00:06:40.206 12:20:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.206 12:20:45 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.206 ************************************ 00:06:40.206 END TEST spdkcli_tcp 00:06:40.206 ************************************ 00:06:40.206 12:20:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.206 12:20:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.206 12:20:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.206 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:06:40.466 ************************************ 00:06:40.466 START TEST dpdk_mem_utility 00:06:40.466 ************************************ 00:06:40.466 12:20:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.466 * Looking for test storage... 00:06:40.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.466 12:20:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:06:40.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.466 --rc genhtml_branch_coverage=1 00:06:40.466 --rc genhtml_function_coverage=1 00:06:40.466 --rc genhtml_legend=1 00:06:40.466 --rc geninfo_all_blocks=1 00:06:40.466 --rc geninfo_unexecuted_blocks=1 00:06:40.466 00:06:40.466 ' 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.466 --rc genhtml_branch_coverage=1 00:06:40.466 --rc genhtml_function_coverage=1 00:06:40.466 --rc genhtml_legend=1 00:06:40.466 --rc geninfo_all_blocks=1 00:06:40.466 --rc geninfo_unexecuted_blocks=1 00:06:40.466 00:06:40.466 ' 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.466 --rc genhtml_branch_coverage=1 00:06:40.466 --rc genhtml_function_coverage=1 00:06:40.466 --rc genhtml_legend=1 00:06:40.466 --rc geninfo_all_blocks=1 00:06:40.466 --rc geninfo_unexecuted_blocks=1 00:06:40.466 00:06:40.466 ' 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.466 --rc genhtml_branch_coverage=1 00:06:40.466 --rc genhtml_function_coverage=1 00:06:40.466 --rc genhtml_legend=1 00:06:40.466 --rc geninfo_all_blocks=1 00:06:40.466 --rc geninfo_unexecuted_blocks=1 00:06:40.466 00:06:40.466 ' 00:06:40.466 12:20:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.466 12:20:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=727503 00:06:40.466 12:20:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.466 12:20:46 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 727503 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 727503 ']' 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.466 12:20:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.466 [2024-11-20 12:20:46.217170] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:06:40.467 [2024-11-20 12:20:46.217214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727503 ] 00:06:40.726 [2024-11-20 12:20:46.286461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.726 [2024-11-20 12:20:46.323481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.295 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.295 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:41.295 12:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.295 12:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.295 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.295 
12:20:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.295 { 00:06:41.295 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.295 } 00:06:41.295 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.295 12:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.295 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:41.295 1 heaps totaling size 810.000000 MiB 00:06:41.295 size: 810.000000 MiB heap id: 0 00:06:41.295 end heaps---------- 00:06:41.295 9 mempools totaling size 595.772034 MiB 00:06:41.295 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:41.295 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:41.295 size: 92.545471 MiB name: bdev_io_727503 00:06:41.295 size: 50.003479 MiB name: msgpool_727503 00:06:41.295 size: 36.509338 MiB name: fsdev_io_727503 00:06:41.295 size: 21.763794 MiB name: PDU_Pool 00:06:41.295 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:41.295 size: 4.133484 MiB name: evtpool_727503 00:06:41.295 size: 0.026123 MiB name: Session_Pool 00:06:41.295 end mempools------- 00:06:41.295 6 memzones totaling size 4.142822 MiB 00:06:41.295 size: 1.000366 MiB name: RG_ring_0_727503 00:06:41.295 size: 1.000366 MiB name: RG_ring_1_727503 00:06:41.295 size: 1.000366 MiB name: RG_ring_4_727503 00:06:41.295 size: 1.000366 MiB name: RG_ring_5_727503 00:06:41.295 size: 0.125366 MiB name: RG_ring_2_727503 00:06:41.295 size: 0.015991 MiB name: RG_ring_3_727503 00:06:41.295 end memzones------- 00:06:41.555 12:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:41.555 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:41.555 list of free elements. 
size: 10.862488 MiB 00:06:41.555 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:41.555 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:41.555 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:41.555 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:41.555 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:41.555 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:41.555 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:41.555 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:41.555 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:41.555 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:41.555 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:41.555 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:41.555 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:41.555 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:41.555 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:41.555 list of standard malloc elements. 
size: 199.218628 MiB 00:06:41.555 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:41.555 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:41.555 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:41.555 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:41.555 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:41.555 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:41.555 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:41.555 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:41.555 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:41.555 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:41.555 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:41.555 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:41.555 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:41.555 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:41.555 list of memzone associated elements. 
size: 599.918884 MiB 00:06:41.555 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:41.555 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:41.555 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:41.555 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:41.555 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:41.556 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_727503_0 00:06:41.556 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:41.556 associated memzone info: size: 48.002930 MiB name: MP_msgpool_727503_0 00:06:41.556 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:41.556 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_727503_0 00:06:41.556 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:41.556 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:41.556 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:41.556 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:41.556 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:41.556 associated memzone info: size: 3.000122 MiB name: MP_evtpool_727503_0 00:06:41.556 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:41.556 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_727503 00:06:41.556 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:41.556 associated memzone info: size: 1.007996 MiB name: MP_evtpool_727503 00:06:41.556 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:41.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:41.556 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:41.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:41.556 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:41.556 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:41.556 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:41.556 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:41.556 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:41.556 associated memzone info: size: 1.000366 MiB name: RG_ring_0_727503 00:06:41.556 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:41.556 associated memzone info: size: 1.000366 MiB name: RG_ring_1_727503 00:06:41.556 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:41.556 associated memzone info: size: 1.000366 MiB name: RG_ring_4_727503 00:06:41.556 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:41.556 associated memzone info: size: 1.000366 MiB name: RG_ring_5_727503 00:06:41.556 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:41.556 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_727503 00:06:41.556 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:41.556 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_727503 00:06:41.556 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:41.556 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:41.556 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:41.556 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:41.556 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:41.556 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:41.556 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:41.556 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_727503 00:06:41.556 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:41.556 associated memzone info: size: 0.125366 MiB name: RG_ring_2_727503 00:06:41.556 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:41.556 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:41.556 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:41.556 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:41.556 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:41.556 associated memzone info: size: 0.015991 MiB name: RG_ring_3_727503 00:06:41.556 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:41.556 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:41.556 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:41.556 associated memzone info: size: 0.000183 MiB name: MP_msgpool_727503 00:06:41.556 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:41.556 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_727503 00:06:41.556 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:41.556 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_727503 00:06:41.556 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:41.556 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:41.556 12:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:41.556 12:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 727503 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 727503 ']' 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 727503 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 727503 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.556 12:20:47 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 727503' 00:06:41.556 killing process with pid 727503 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 727503 00:06:41.556 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 727503 00:06:41.816 00:06:41.816 real 0m1.442s 00:06:41.816 user 0m1.484s 00:06:41.816 sys 0m0.419s 00:06:41.816 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.816 12:20:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.816 ************************************ 00:06:41.816 END TEST dpdk_mem_utility 00:06:41.816 ************************************ 00:06:41.816 12:20:47 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.816 12:20:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.816 12:20:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.816 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.816 ************************************ 00:06:41.816 START TEST event 00:06:41.816 ************************************ 00:06:41.816 12:20:47 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.075 * Looking for test storage... 
00:06:42.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.075 12:20:47 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.075 12:20:47 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.075 12:20:47 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.075 12:20:47 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.075 12:20:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.075 12:20:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.075 12:20:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.075 12:20:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.075 12:20:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.075 12:20:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.075 12:20:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.075 12:20:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.075 12:20:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.075 12:20:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.075 12:20:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.075 12:20:47 event -- scripts/common.sh@344 -- # case "$op" in 00:06:42.075 12:20:47 event -- scripts/common.sh@345 -- # : 1 00:06:42.075 12:20:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.075 12:20:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.075 12:20:47 event -- scripts/common.sh@365 -- # decimal 1 00:06:42.075 12:20:47 event -- scripts/common.sh@353 -- # local d=1 00:06:42.075 12:20:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.075 12:20:47 event -- scripts/common.sh@355 -- # echo 1 00:06:42.075 12:20:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.075 12:20:47 event -- scripts/common.sh@366 -- # decimal 2 00:06:42.075 12:20:47 event -- scripts/common.sh@353 -- # local d=2 00:06:42.075 12:20:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.075 12:20:47 event -- scripts/common.sh@355 -- # echo 2 00:06:42.075 12:20:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.075 12:20:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.075 12:20:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.075 12:20:47 event -- scripts/common.sh@368 -- # return 0 00:06:42.075 12:20:47 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.075 12:20:47 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.075 --rc genhtml_branch_coverage=1 00:06:42.075 --rc genhtml_function_coverage=1 00:06:42.075 --rc genhtml_legend=1 00:06:42.075 --rc geninfo_all_blocks=1 00:06:42.075 --rc geninfo_unexecuted_blocks=1 00:06:42.075 00:06:42.076 ' 00:06:42.076 12:20:47 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.076 --rc genhtml_branch_coverage=1 00:06:42.076 --rc genhtml_function_coverage=1 00:06:42.076 --rc genhtml_legend=1 00:06:42.076 --rc geninfo_all_blocks=1 00:06:42.076 --rc geninfo_unexecuted_blocks=1 00:06:42.076 00:06:42.076 ' 00:06:42.076 12:20:47 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.076 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:42.076 --rc genhtml_branch_coverage=1 00:06:42.076 --rc genhtml_function_coverage=1 00:06:42.076 --rc genhtml_legend=1 00:06:42.076 --rc geninfo_all_blocks=1 00:06:42.076 --rc geninfo_unexecuted_blocks=1 00:06:42.076 00:06:42.076 ' 00:06:42.076 12:20:47 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.076 --rc genhtml_branch_coverage=1 00:06:42.076 --rc genhtml_function_coverage=1 00:06:42.076 --rc genhtml_legend=1 00:06:42.076 --rc geninfo_all_blocks=1 00:06:42.076 --rc geninfo_unexecuted_blocks=1 00:06:42.076 00:06:42.076 ' 00:06:42.076 12:20:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:42.076 12:20:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.076 12:20:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.076 12:20:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:42.076 12:20:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.076 12:20:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.076 ************************************ 00:06:42.076 START TEST event_perf 00:06:42.076 ************************************ 00:06:42.076 12:20:47 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.076 Running I/O for 1 seconds...[2024-11-20 12:20:47.733222] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:06:42.076 [2024-11-20 12:20:47.733278] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727830 ] 00:06:42.076 [2024-11-20 12:20:47.808191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.335 [2024-11-20 12:20:47.849912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.335 [2024-11-20 12:20:47.850026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.335 [2024-11-20 12:20:47.850139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.335 [2024-11-20 12:20:47.850139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.273 Running I/O for 1 seconds... 00:06:43.273 lcore 0: 215293 00:06:43.273 lcore 1: 215293 00:06:43.273 lcore 2: 215292 00:06:43.273 lcore 3: 215292 00:06:43.273 done. 
00:06:43.273 00:06:43.273 real 0m1.174s 00:06:43.273 user 0m4.098s 00:06:43.273 sys 0m0.073s 00:06:43.273 12:20:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.273 12:20:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.273 ************************************ 00:06:43.273 END TEST event_perf 00:06:43.273 ************************************ 00:06:43.273 12:20:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.273 12:20:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:43.273 12:20:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.273 12:20:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.273 ************************************ 00:06:43.273 START TEST event_reactor 00:06:43.273 ************************************ 00:06:43.273 12:20:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.273 [2024-11-20 12:20:48.974005] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:06:43.273 [2024-11-20 12:20:48.974077] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728111 ] 00:06:43.538 [2024-11-20 12:20:49.051355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.538 [2024-11-20 12:20:49.088667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.477 test_start 00:06:44.477 oneshot 00:06:44.477 tick 100 00:06:44.477 tick 100 00:06:44.477 tick 250 00:06:44.477 tick 100 00:06:44.477 tick 100 00:06:44.477 tick 100 00:06:44.477 tick 250 00:06:44.477 tick 500 00:06:44.477 tick 100 00:06:44.477 tick 100 00:06:44.477 tick 250 00:06:44.477 tick 100 00:06:44.477 tick 100 00:06:44.477 test_end 00:06:44.477 00:06:44.477 real 0m1.176s 00:06:44.477 user 0m1.087s 00:06:44.477 sys 0m0.085s 00:06:44.477 12:20:50 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.477 12:20:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.477 ************************************ 00:06:44.477 END TEST event_reactor 00:06:44.477 ************************************ 00:06:44.477 12:20:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.477 12:20:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:44.477 12:20:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.477 12:20:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.477 ************************************ 00:06:44.477 START TEST event_reactor_perf 00:06:44.477 ************************************ 00:06:44.477 12:20:50 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:44.477 [2024-11-20 12:20:50.223024] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:06:44.477 [2024-11-20 12:20:50.223086] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728397 ] 00:06:44.736 [2024-11-20 12:20:50.300420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.736 [2024-11-20 12:20:50.339196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.674 test_start 00:06:45.674 test_end 00:06:45.674 Performance: 548067 events per second 00:06:45.674 00:06:45.674 real 0m1.175s 00:06:45.674 user 0m1.095s 00:06:45.674 sys 0m0.075s 00:06:45.674 12:20:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.674 12:20:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.674 ************************************ 00:06:45.674 END TEST event_reactor_perf 00:06:45.674 ************************************ 00:06:45.674 12:20:51 event -- event/event.sh@49 -- # uname -s 00:06:45.674 12:20:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.674 12:20:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.674 12:20:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.674 12:20:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.674 12:20:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.934 ************************************ 00:06:45.934 START TEST event_scheduler 00:06:45.934 ************************************ 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.934 * Looking for test storage... 00:06:45.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.934 12:20:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.934 --rc genhtml_branch_coverage=1 00:06:45.934 --rc genhtml_function_coverage=1 00:06:45.934 --rc genhtml_legend=1 00:06:45.934 --rc geninfo_all_blocks=1 00:06:45.934 --rc geninfo_unexecuted_blocks=1 00:06:45.934 00:06:45.934 ' 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.934 --rc genhtml_branch_coverage=1 00:06:45.934 --rc genhtml_function_coverage=1 00:06:45.934 --rc 
genhtml_legend=1 00:06:45.934 --rc geninfo_all_blocks=1 00:06:45.934 --rc geninfo_unexecuted_blocks=1 00:06:45.934 00:06:45.934 ' 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.934 --rc genhtml_branch_coverage=1 00:06:45.934 --rc genhtml_function_coverage=1 00:06:45.934 --rc genhtml_legend=1 00:06:45.934 --rc geninfo_all_blocks=1 00:06:45.934 --rc geninfo_unexecuted_blocks=1 00:06:45.934 00:06:45.934 ' 00:06:45.934 12:20:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.934 --rc genhtml_branch_coverage=1 00:06:45.934 --rc genhtml_function_coverage=1 00:06:45.934 --rc genhtml_legend=1 00:06:45.934 --rc geninfo_all_blocks=1 00:06:45.934 --rc geninfo_unexecuted_blocks=1 00:06:45.934 00:06:45.934 ' 00:06:45.934 12:20:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.934 12:20:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=728712 00:06:45.934 12:20:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.935 12:20:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.935 12:20:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 728712 00:06:45.935 12:20:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 728712 ']' 00:06:45.935 12:20:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.935 12:20:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.935 12:20:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.935 12:20:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.935 12:20:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.935 [2024-11-20 12:20:51.675670] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:06:45.935 [2024-11-20 12:20:51.675714] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728712 ] 00:06:46.194 [2024-11-20 12:20:51.746874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.194 [2024-11-20 12:20:51.787553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.194 [2024-11-20 12:20:51.787668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.194 [2024-11-20 12:20:51.787777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.194 [2024-11-20 12:20:51.787777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.763 12:20:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.763 12:20:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:46.763 12:20:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.763 12:20:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.763 12:20:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.763 [2024-11-20 12:20:52.502139] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:46.763 [2024-11-20 12:20:52.502158] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.763 [2024-11-20 12:20:52.502167] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.763 [2024-11-20 12:20:52.502172] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.763 [2024-11-20 12:20:52.502177] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.763 12:20:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.763 12:20:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.763 12:20:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.763 12:20:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 [2024-11-20 12:20:52.574706] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:47.022 12:20:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:47.022 12:20:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.022 12:20:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 ************************************ 00:06:47.022 START TEST scheduler_create_thread 00:06:47.022 ************************************ 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 2 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 3 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 4 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 5 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 6 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 7 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 8 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 9 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 10 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.022 12:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.591 12:20:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.591 12:20:53 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:47.591 12:20:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.591 12:20:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.971 12:20:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.971 12:20:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.971 12:20:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.971 12:20:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.971 12:20:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.350 12:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.350 00:06:50.350 real 0m3.100s 00:06:50.350 user 0m0.023s 00:06:50.350 sys 0m0.007s 00:06:50.350 12:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.350 12:20:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.350 ************************************ 00:06:50.350 END TEST scheduler_create_thread 00:06:50.350 ************************************ 00:06:50.350 12:20:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:50.350 12:20:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 728712 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 728712 ']' 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 728712 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 728712 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 728712' 00:06:50.350 killing process with pid 728712 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 728712 00:06:50.350 12:20:55 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 728712 00:06:50.350 [2024-11-20 12:20:56.089802] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:50.610 00:06:50.610 real 0m4.820s 00:06:50.610 user 0m9.428s 00:06:50.610 sys 0m0.418s 00:06:50.610 12:20:56 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.610 12:20:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.610 ************************************ 00:06:50.610 END TEST event_scheduler 00:06:50.610 ************************************ 00:06:50.610 12:20:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.610 12:20:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.610 12:20:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.610 12:20:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.610 12:20:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.610 ************************************ 00:06:50.610 START TEST app_repeat 00:06:50.610 ************************************ 00:06:50.610 12:20:56 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=729565 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 729565' 00:06:50.610 Process app_repeat pid: 729565 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.610 spdk_app_start Round 0 00:06:50.610 12:20:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 729565 /var/tmp/spdk-nbd.sock 00:06:50.610 12:20:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 729565 ']' 00:06:50.610 12:20:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.610 12:20:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.610 12:20:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.610 12:20:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.610 12:20:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.868 [2024-11-20 12:20:56.384369] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:06:50.868 [2024-11-20 12:20:56.384433] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729565 ] 00:06:50.868 [2024-11-20 12:20:56.462084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.868 [2024-11-20 12:20:56.500801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.868 [2024-11-20 12:20:56.500801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.868 12:20:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.868 12:20:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:50.868 12:20:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.127 Malloc0 00:06:51.127 12:20:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.386 Malloc1 00:06:51.386 12:20:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.386 
12:20:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.386 12:20:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.646 /dev/nbd0 00:06:51.646 12:20:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.646 12:20:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:51.646 1+0 records in 00:06:51.646 1+0 records out 00:06:51.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188227 s, 21.8 MB/s 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.646 12:20:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:51.646 12:20:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.646 12:20:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.646 12:20:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.646 /dev/nbd1 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.905 12:20:57 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.905 1+0 records in 00:06:51.905 1+0 records out 00:06:51.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178587 s, 22.9 MB/s 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.905 12:20:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.905 { 00:06:51.905 "nbd_device": "/dev/nbd0", 00:06:51.905 "bdev_name": "Malloc0" 00:06:51.905 }, 00:06:51.905 { 00:06:51.905 "nbd_device": "/dev/nbd1", 00:06:51.905 "bdev_name": "Malloc1" 00:06:51.905 } 00:06:51.905 ]' 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.905 { 00:06:51.905 "nbd_device": "/dev/nbd0", 00:06:51.905 "bdev_name": "Malloc0" 00:06:51.905 
}, 00:06:51.905 { 00:06:51.905 "nbd_device": "/dev/nbd1", 00:06:51.905 "bdev_name": "Malloc1" 00:06:51.905 } 00:06:51.905 ]' 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.905 /dev/nbd1' 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.905 /dev/nbd1' 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.905 12:20:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.164 256+0 records in 00:06:52.164 256+0 records out 00:06:52.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106675 s, 98.3 MB/s 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.164 256+0 records in 00:06:52.164 256+0 records out 00:06:52.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129623 s, 80.9 MB/s 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.164 256+0 records in 00:06:52.164 256+0 records out 00:06:52.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139185 s, 75.3 MB/s 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.164 12:20:57 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.164 12:20:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.424 12:20:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.424 12:20:58 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.424 12:20:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.683 12:20:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.684 12:20:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.684 12:20:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.684 12:20:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.684 12:20:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.684 12:20:58 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.943 12:20:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.203 [2024-11-20 12:20:58.712851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.203 [2024-11-20 12:20:58.746703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.203 [2024-11-20 12:20:58.746704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.203 [2024-11-20 12:20:58.786579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.203 [2024-11-20 12:20:58.786618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.492 12:21:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.492 12:21:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:56.492 spdk_app_start Round 1 00:06:56.492 12:21:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 729565 /var/tmp/spdk-nbd.sock 00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 729565 ']' 00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.492 12:21:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:56.492 12:21:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.492 Malloc0 00:06:56.492 12:21:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.492 Malloc1 00:06:56.492 12:21:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.492 12:21:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.751 /dev/nbd0 00:06:56.751 12:21:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.751 12:21:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.752 1+0 records in 00:06:56.752 1+0 records out 00:06:56.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201625 s, 20.3 MB/s 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:56.752 12:21:02 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.752 12:21:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:56.752 12:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.752 12:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.752 12:21:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.011 /dev/nbd1 00:06:57.011 12:21:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.011 12:21:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.011 12:21:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:57.011 12:21:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:57.011 12:21:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.011 12:21:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.012 1+0 records in 00:06:57.012 1+0 records out 00:06:57.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206096 s, 19.9 MB/s 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.012 12:21:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:57.012 12:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.012 12:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.012 12:21:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.012 12:21:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.012 12:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.274 { 00:06:57.274 "nbd_device": "/dev/nbd0", 00:06:57.274 "bdev_name": "Malloc0" 00:06:57.274 }, 00:06:57.274 { 00:06:57.274 "nbd_device": "/dev/nbd1", 00:06:57.274 "bdev_name": "Malloc1" 00:06:57.274 } 00:06:57.274 ]' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.274 { 00:06:57.274 "nbd_device": "/dev/nbd0", 00:06:57.274 "bdev_name": "Malloc0" 00:06:57.274 }, 00:06:57.274 { 00:06:57.274 "nbd_device": "/dev/nbd1", 00:06:57.274 "bdev_name": "Malloc1" 00:06:57.274 } 00:06:57.274 ]' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.274 /dev/nbd1' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.274 /dev/nbd1' 00:06:57.274 
12:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.274 256+0 records in 00:06:57.274 256+0 records out 00:06:57.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010671 s, 98.3 MB/s 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.274 256+0 records in 00:06:57.274 256+0 records out 00:06:57.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126777 s, 82.7 MB/s 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.274 256+0 records in 00:06:57.274 256+0 records out 00:06:57.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139365 s, 75.2 MB/s 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.274 12:21:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.606 12:21:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.865 12:21:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.865 12:21:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.866 12:21:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.866 12:21:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.866 12:21:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.125 12:21:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.384 [2024-11-20 12:21:03.941407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.384 [2024-11-20 12:21:03.978091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.384 [2024-11-20 12:21:03.978094] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.384 [2024-11-20 12:21:04.018901] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.384 [2024-11-20 12:21:04.018937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.676 12:21:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.676 12:21:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:01.676 spdk_app_start Round 2 00:07:01.676 12:21:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 729565 /var/tmp/spdk-nbd.sock 00:07:01.676 12:21:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 729565 ']' 00:07:01.676 12:21:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.676 12:21:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.676 12:21:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:01.677 12:21:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.677 12:21:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.677 12:21:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.677 12:21:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:01.677 12:21:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.677 Malloc0 00:07:01.677 12:21:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.677 Malloc1 00:07:01.677 12:21:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.677 12:21:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.936 /dev/nbd0 00:07:01.936 12:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.936 12:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.936 1+0 records in 00:07:01.936 1+0 records out 00:07:01.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253156 s, 16.2 MB/s 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.936 12:21:07 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.936 12:21:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.936 12:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.936 12:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.936 12:21:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.195 /dev/nbd1 00:07:02.195 12:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.195 12:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.195 1+0 records in 00:07:02.195 1+0 records out 00:07:02.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000121122 s, 33.8 MB/s 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.195 12:21:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:02.195 12:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.195 12:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.195 12:21:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.195 12:21:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.195 12:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.454 { 00:07:02.454 "nbd_device": "/dev/nbd0", 00:07:02.454 "bdev_name": "Malloc0" 00:07:02.454 }, 00:07:02.454 { 00:07:02.454 "nbd_device": "/dev/nbd1", 00:07:02.454 "bdev_name": "Malloc1" 00:07:02.454 } 00:07:02.454 ]' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.454 { 00:07:02.454 "nbd_device": "/dev/nbd0", 00:07:02.454 "bdev_name": "Malloc0" 00:07:02.454 }, 00:07:02.454 { 00:07:02.454 "nbd_device": "/dev/nbd1", 00:07:02.454 "bdev_name": "Malloc1" 00:07:02.454 } 00:07:02.454 ]' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.454 /dev/nbd1' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.454 /dev/nbd1' 00:07:02.454 
12:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.454 256+0 records in 00:07:02.454 256+0 records out 00:07:02.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106893 s, 98.1 MB/s 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.454 256+0 records in 00:07:02.454 256+0 records out 00:07:02.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122315 s, 85.7 MB/s 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.454 256+0 records in 00:07:02.454 256+0 records out 00:07:02.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131697 s, 79.6 MB/s 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.454 12:21:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.713 12:21:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.970 12:21:08 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.970 12:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.228 12:21:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.228 12:21:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.488 12:21:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:03.488 [2024-11-20 12:21:09.175047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.488 [2024-11-20 12:21:09.209177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.488 [2024-11-20 12:21:09.209177] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.488 [2024-11-20 12:21:09.249152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:03.488 [2024-11-20 12:21:09.249212] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.778 12:21:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 729565 /var/tmp/spdk-nbd.sock 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 729565 ']' 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.778 12:21:12 event.app_repeat -- event/event.sh@39 -- # killprocess 729565 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 729565 ']' 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 729565 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 729565 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 729565' 00:07:06.778 killing process with pid 729565 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 729565 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 729565 00:07:06.778 spdk_app_start is called in Round 0. 00:07:06.778 Shutdown signal received, stop current app iteration 00:07:06.778 Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 reinitialization... 00:07:06.778 spdk_app_start is called in Round 1. 00:07:06.778 Shutdown signal received, stop current app iteration 00:07:06.778 Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 reinitialization... 00:07:06.778 spdk_app_start is called in Round 2. 
00:07:06.778 Shutdown signal received, stop current app iteration 00:07:06.778 Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 reinitialization... 00:07:06.778 spdk_app_start is called in Round 3. 00:07:06.778 Shutdown signal received, stop current app iteration 00:07:06.778 12:21:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:06.778 12:21:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:06.778 00:07:06.778 real 0m16.075s 00:07:06.778 user 0m35.199s 00:07:06.778 sys 0m2.442s 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.778 12:21:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.778 ************************************ 00:07:06.778 END TEST app_repeat 00:07:06.778 ************************************ 00:07:06.778 12:21:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:06.778 12:21:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.778 12:21:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.778 12:21:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.778 12:21:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.778 ************************************ 00:07:06.778 START TEST cpu_locks 00:07:06.778 ************************************ 00:07:06.778 12:21:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:07.037 * Looking for test storage... 
00:07:07.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.038 12:21:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.038 --rc genhtml_branch_coverage=1 00:07:07.038 --rc genhtml_function_coverage=1 00:07:07.038 --rc genhtml_legend=1 00:07:07.038 --rc geninfo_all_blocks=1 00:07:07.038 --rc geninfo_unexecuted_blocks=1 00:07:07.038 00:07:07.038 ' 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.038 --rc genhtml_branch_coverage=1 00:07:07.038 --rc genhtml_function_coverage=1 00:07:07.038 --rc genhtml_legend=1 00:07:07.038 --rc geninfo_all_blocks=1 00:07:07.038 --rc geninfo_unexecuted_blocks=1 
00:07:07.038 00:07:07.038 ' 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.038 --rc genhtml_branch_coverage=1 00:07:07.038 --rc genhtml_function_coverage=1 00:07:07.038 --rc genhtml_legend=1 00:07:07.038 --rc geninfo_all_blocks=1 00:07:07.038 --rc geninfo_unexecuted_blocks=1 00:07:07.038 00:07:07.038 ' 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.038 --rc genhtml_branch_coverage=1 00:07:07.038 --rc genhtml_function_coverage=1 00:07:07.038 --rc genhtml_legend=1 00:07:07.038 --rc geninfo_all_blocks=1 00:07:07.038 --rc geninfo_unexecuted_blocks=1 00:07:07.038 00:07:07.038 ' 00:07:07.038 12:21:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:07.038 12:21:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:07.038 12:21:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:07.038 12:21:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.038 12:21:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.038 ************************************ 00:07:07.038 START TEST default_locks 00:07:07.038 ************************************ 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=733451 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 733451 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 733451 ']' 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.038 12:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.038 [2024-11-20 12:21:12.747052] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:07.038 [2024-11-20 12:21:12.747093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733451 ] 00:07:07.297 [2024-11-20 12:21:12.816070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.297 [2024-11-20 12:21:12.855132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.556 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.556 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:07.556 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 733451 00:07:07.556 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 733451 00:07:07.556 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.815 lslocks: write error 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 733451 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 733451 ']' 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 733451 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 733451 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 733451' 00:07:07.815 killing process with pid 733451 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 733451 00:07:07.815 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 733451 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 733451 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 733451 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 733451 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 733451 ']' 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (733451) - No such process 00:07:08.075 ERROR: process (pid: 733451) is no longer running 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.075 00:07:08.075 real 0m1.063s 00:07:08.075 user 0m1.007s 00:07:08.075 sys 0m0.504s 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.075 12:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.075 ************************************ 00:07:08.075 END TEST default_locks 00:07:08.075 ************************************ 00:07:08.075 12:21:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:08.075 12:21:13 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.075 12:21:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.075 12:21:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.075 ************************************ 00:07:08.075 START TEST default_locks_via_rpc 00:07:08.075 ************************************ 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=733668 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 733668 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 733668 ']' 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.075 12:21:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.334 [2024-11-20 12:21:13.882653] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:08.334 [2024-11-20 12:21:13.882696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733668 ] 00:07:08.334 [2024-11-20 12:21:13.953587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.334 [2024-11-20 12:21:13.992684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.594 12:21:14 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 733668 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 733668 00:07:08.594 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 733668 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 733668 ']' 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 733668 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 733668 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 733668' 00:07:08.853 killing process with pid 733668 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 733668 00:07:08.853 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 733668 00:07:09.113 00:07:09.113 real 0m0.939s 00:07:09.113 user 0m0.870s 00:07:09.113 sys 0m0.445s 00:07:09.113 12:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.113 12:21:14 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.113 ************************************ 00:07:09.113 END TEST default_locks_via_rpc 00:07:09.113 ************************************ 00:07:09.113 12:21:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:09.113 12:21:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.113 12:21:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.113 12:21:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.113 ************************************ 00:07:09.113 START TEST non_locking_app_on_locked_coremask 00:07:09.113 ************************************ 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=733789 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 733789 /var/tmp/spdk.sock 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 733789 ']' 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:09.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.113 12:21:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.372 [2024-11-20 12:21:14.894373] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:09.372 [2024-11-20 12:21:14.894422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733789 ] 00:07:09.372 [2024-11-20 12:21:14.964417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.372 [2024-11-20 12:21:15.003217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=734041 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 734041 /var/tmp/spdk2.sock 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 734041 ']' 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.941 12:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.200 [2024-11-20 12:21:15.739533] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:10.200 [2024-11-20 12:21:15.739577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734041 ] 00:07:10.200 [2024-11-20 12:21:15.815320] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:10.200 [2024-11-20 12:21:15.815342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.200 [2024-11-20 12:21:15.892700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.138 12:21:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.138 12:21:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.138 12:21:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 733789 00:07:11.138 12:21:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 733789 00:07:11.138 12:21:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.397 lslocks: write error 00:07:11.397 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 733789 00:07:11.397 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 733789 ']' 00:07:11.397 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 733789 00:07:11.397 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:11.397 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.397 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 733789 00:07:11.656 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.656 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.656 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 733789' 00:07:11.656 killing process with pid 733789 00:07:11.656 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 733789 00:07:11.656 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 733789 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 734041 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 734041 ']' 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 734041 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734041 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734041' 00:07:12.224 killing process with pid 734041 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 734041 00:07:12.224 12:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 734041 00:07:12.484 00:07:12.484 real 0m3.284s 00:07:12.484 user 0m3.528s 00:07:12.484 sys 0m0.948s 00:07:12.484 12:21:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.484 12:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.484 ************************************ 00:07:12.484 END TEST non_locking_app_on_locked_coremask 00:07:12.484 ************************************ 00:07:12.484 12:21:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:12.484 12:21:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.484 12:21:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.484 12:21:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.484 ************************************ 00:07:12.484 START TEST locking_app_on_unlocked_coremask 00:07:12.484 ************************************ 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=734551 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 734551 /var/tmp/spdk.sock 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 734551 ']' 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.484 12:21:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.484 12:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.744 [2024-11-20 12:21:18.248230] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:12.744 [2024-11-20 12:21:18.248285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734551 ] 00:07:12.744 [2024-11-20 12:21:18.321441] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:12.744 [2024-11-20 12:21:18.321468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.744 [2024-11-20 12:21:18.357674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=734618 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 734618 /var/tmp/spdk2.sock 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 734618 ']' 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.313 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.577 [2024-11-20 12:21:19.103687] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:13.578 [2024-11-20 12:21:19.103732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734618 ] 00:07:13.578 [2024-11-20 12:21:19.189026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.578 [2024-11-20 12:21:19.267256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.516 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.516 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:14.516 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 734618 00:07:14.516 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 734618 00:07:14.516 12:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.084 lslocks: write error 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 734551 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 734551 ']' 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 734551 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734551 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734551' 00:07:15.084 killing process with pid 734551 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 734551 00:07:15.084 12:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 734551 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 734618 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 734618 ']' 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 734618 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734618 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734618' 00:07:15.653 killing process with pid 734618 00:07:15.653 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 734618 00:07:15.653 12:21:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 734618 00:07:15.913 00:07:15.913 real 0m3.350s 00:07:15.913 user 0m3.608s 00:07:15.913 sys 0m0.991s 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.913 ************************************ 00:07:15.913 END TEST locking_app_on_unlocked_coremask 00:07:15.913 ************************************ 00:07:15.913 12:21:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:15.913 12:21:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.913 12:21:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.913 12:21:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.913 ************************************ 00:07:15.913 START TEST locking_app_on_locked_coremask 00:07:15.913 ************************************ 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=735170 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 735170 /var/tmp/spdk.sock 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 735170 ']' 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.913 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.913 [2024-11-20 12:21:21.660445] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:15.913 [2024-11-20 12:21:21.660483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735170 ] 00:07:16.172 [2024-11-20 12:21:21.734941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.172 [2024-11-20 12:21:21.769513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=735177 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 735177 /var/tmp/spdk2.sock 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 735177 /var/tmp/spdk2.sock 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 735177 /var/tmp/spdk2.sock 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 735177 ']' 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.431 12:21:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.431 [2024-11-20 12:21:22.032303] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:16.431 [2024-11-20 12:21:22.032346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735177 ] 00:07:16.431 [2024-11-20 12:21:22.112342] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 735170 has claimed it. 00:07:16.431 [2024-11-20 12:21:22.112379] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (735177) - No such process 00:07:17.016 ERROR: process (pid: 735177) is no longer running 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 735170 00:07:17.016 12:21:22 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 735170 00:07:17.016 12:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.584 lslocks: write error 00:07:17.584 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 735170 00:07:17.584 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 735170 ']' 00:07:17.584 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 735170 00:07:17.584 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.584 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.584 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 735170 00:07:17.584 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.585 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.585 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 735170' 00:07:17.585 killing process with pid 735170 00:07:17.585 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 735170 00:07:17.585 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 735170 00:07:17.844 00:07:17.844 real 0m1.813s 00:07:17.844 user 0m1.954s 00:07:17.844 sys 0m0.609s 00:07:17.844 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.844 12:21:23 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.844 ************************************ 00:07:17.844 END TEST locking_app_on_locked_coremask 00:07:17.844 ************************************ 00:07:17.844 12:21:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.844 12:21:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.844 12:21:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.844 12:21:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.844 ************************************ 00:07:17.844 START TEST locking_overlapped_coremask 00:07:17.844 ************************************ 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=735476 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 735476 /var/tmp/spdk.sock 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 735476 ']' 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.844 12:21:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.844 [2024-11-20 12:21:23.544428] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:17.844 [2024-11-20 12:21:23.544467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735476 ] 00:07:18.103 [2024-11-20 12:21:23.614911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.103 [2024-11-20 12:21:23.656632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.103 [2024-11-20 12:21:23.656765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.103 [2024-11-20 12:21:23.656766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=735736 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 735736 /var/tmp/spdk2.sock 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 735736 /var/tmp/spdk2.sock 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 735736 /var/tmp/spdk2.sock 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 735736 ']' 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.672 12:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.672 [2024-11-20 12:21:24.410135] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:18.672 [2024-11-20 12:21:24.410180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735736 ] 00:07:18.931 [2024-11-20 12:21:24.493305] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 735476 has claimed it. 00:07:18.931 [2024-11-20 12:21:24.493338] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (735736) - No such process 00:07:19.499 ERROR: process (pid: 735736) is no longer running 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 735476 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 735476 ']' 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 735476 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 735476 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 735476' 00:07:19.499 killing process with pid 735476 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 735476 00:07:19.499 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 735476 00:07:19.759 00:07:19.759 real 0m1.893s 00:07:19.759 user 0m5.466s 00:07:19.759 sys 0m0.417s 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.759 ************************************ 
00:07:19.759 END TEST locking_overlapped_coremask 00:07:19.759 ************************************ 00:07:19.759 12:21:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.759 12:21:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.759 12:21:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.759 12:21:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.759 ************************************ 00:07:19.759 START TEST locking_overlapped_coremask_via_rpc 00:07:19.759 ************************************ 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=736023 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 736023 /var/tmp/spdk.sock 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 736023 ']' 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:19.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.759 12:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.759 [2024-11-20 12:21:25.508469] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:19.759 [2024-11-20 12:21:25.508508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736023 ] 00:07:20.019 [2024-11-20 12:21:25.579612] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:20.019 [2024-11-20 12:21:25.579639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.019 [2024-11-20 12:21:25.618035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.019 [2024-11-20 12:21:25.618147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.019 [2024-11-20 12:21:25.618148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=736047 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 736047 /var/tmp/spdk2.sock 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 736047 ']' 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.587 12:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.847 [2024-11-20 12:21:26.360510] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:20.847 [2024-11-20 12:21:26.360553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736047 ] 00:07:20.847 [2024-11-20 12:21:26.443001] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.847 [2024-11-20 12:21:26.443031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.847 [2024-11-20 12:21:26.528758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.847 [2024-11-20 12:21:26.532463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.847 [2024-11-20 12:21:26.532464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.415 12:21:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.415 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.675 [2024-11-20 12:21:27.179484] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 736023 has claimed it. 00:07:21.675 request: 00:07:21.675 { 00:07:21.675 "method": "framework_enable_cpumask_locks", 00:07:21.675 "req_id": 1 00:07:21.675 } 00:07:21.675 Got JSON-RPC error response 00:07:21.675 response: 00:07:21.675 { 00:07:21.675 "code": -32603, 00:07:21.675 "message": "Failed to claim CPU core: 2" 00:07:21.675 } 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 736023 /var/tmp/spdk.sock 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 736023 ']' 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 736047 /var/tmp/spdk2.sock 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 736047 ']' 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.675 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.936 00:07:21.936 real 0m2.092s 00:07:21.936 user 0m0.876s 00:07:21.936 sys 0m0.156s 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.936 12:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.936 ************************************ 00:07:21.936 END TEST locking_overlapped_coremask_via_rpc 00:07:21.936 ************************************ 00:07:21.936 12:21:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.936 12:21:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 736023 ]] 00:07:21.936 12:21:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 736023 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 736023 ']' 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 736023 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 736023 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.936 12:21:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 736023' 00:07:21.936 killing process with pid 736023 00:07:21.937 12:21:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 736023 00:07:21.937 12:21:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 736023 00:07:22.196 12:21:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 736047 ]] 00:07:22.196 12:21:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 736047 00:07:22.196 12:21:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 736047 ']' 00:07:22.196 12:21:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 736047 00:07:22.196 12:21:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.196 12:21:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.196 12:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 736047 00:07:22.455 12:21:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:22.455 12:21:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:22.455 12:21:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 736047' 00:07:22.455 
killing process with pid 736047 00:07:22.455 12:21:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 736047 00:07:22.455 12:21:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 736047 00:07:22.715 12:21:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.715 12:21:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.715 12:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 736023 ]] 00:07:22.715 12:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 736023 00:07:22.715 12:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 736023 ']' 00:07:22.715 12:21:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 736023 00:07:22.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (736023) - No such process 00:07:22.715 12:21:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 736023 is not found' 00:07:22.715 Process with pid 736023 is not found 00:07:22.715 12:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 736047 ]] 00:07:22.715 12:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 736047 00:07:22.715 12:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 736047 ']' 00:07:22.715 12:21:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 736047 00:07:22.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (736047) - No such process 00:07:22.715 12:21:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 736047 is not found' 00:07:22.715 Process with pid 736047 is not found 00:07:22.715 12:21:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.715 00:07:22.715 real 0m15.800s 00:07:22.715 user 0m28.027s 00:07:22.715 sys 0m4.997s 00:07:22.715 12:21:28 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.715 12:21:28 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.715 ************************************ 00:07:22.715 END TEST cpu_locks 00:07:22.715 ************************************ 00:07:22.715 00:07:22.715 real 0m40.825s 00:07:22.715 user 1m19.196s 00:07:22.715 sys 0m8.475s 00:07:22.715 12:21:28 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.715 12:21:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 ************************************ 00:07:22.715 END TEST event 00:07:22.715 ************************************ 00:07:22.715 12:21:28 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.715 12:21:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.715 12:21:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.715 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 ************************************ 00:07:22.715 START TEST thread 00:07:22.715 ************************************ 00:07:22.715 12:21:28 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.975 * Looking for test storage... 
00:07:22.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.975 12:21:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.975 12:21:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.975 12:21:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.975 12:21:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.975 12:21:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.975 12:21:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.975 12:21:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.975 12:21:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.975 12:21:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.975 12:21:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.975 12:21:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.975 12:21:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:22.975 12:21:28 thread -- scripts/common.sh@345 -- # : 1 00:07:22.975 12:21:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.975 12:21:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.975 12:21:28 thread -- scripts/common.sh@365 -- # decimal 1 00:07:22.975 12:21:28 thread -- scripts/common.sh@353 -- # local d=1 00:07:22.975 12:21:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.975 12:21:28 thread -- scripts/common.sh@355 -- # echo 1 00:07:22.975 12:21:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.975 12:21:28 thread -- scripts/common.sh@366 -- # decimal 2 00:07:22.975 12:21:28 thread -- scripts/common.sh@353 -- # local d=2 00:07:22.975 12:21:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.975 12:21:28 thread -- scripts/common.sh@355 -- # echo 2 00:07:22.975 12:21:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.975 12:21:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.975 12:21:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.975 12:21:28 thread -- scripts/common.sh@368 -- # return 0 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.975 --rc genhtml_branch_coverage=1 00:07:22.975 --rc genhtml_function_coverage=1 00:07:22.975 --rc genhtml_legend=1 00:07:22.975 --rc geninfo_all_blocks=1 00:07:22.975 --rc geninfo_unexecuted_blocks=1 00:07:22.975 00:07:22.975 ' 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.975 --rc genhtml_branch_coverage=1 00:07:22.975 --rc genhtml_function_coverage=1 00:07:22.975 --rc genhtml_legend=1 00:07:22.975 --rc geninfo_all_blocks=1 00:07:22.975 --rc geninfo_unexecuted_blocks=1 00:07:22.975 00:07:22.975 ' 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.975 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.975 --rc genhtml_branch_coverage=1 00:07:22.975 --rc genhtml_function_coverage=1 00:07:22.975 --rc genhtml_legend=1 00:07:22.975 --rc geninfo_all_blocks=1 00:07:22.975 --rc geninfo_unexecuted_blocks=1 00:07:22.975 00:07:22.975 ' 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.975 --rc genhtml_branch_coverage=1 00:07:22.975 --rc genhtml_function_coverage=1 00:07:22.975 --rc genhtml_legend=1 00:07:22.975 --rc geninfo_all_blocks=1 00:07:22.975 --rc geninfo_unexecuted_blocks=1 00:07:22.975 00:07:22.975 ' 00:07:22.975 12:21:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.975 12:21:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.975 ************************************ 00:07:22.975 START TEST thread_poller_perf 00:07:22.975 ************************************ 00:07:22.975 12:21:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.975 [2024-11-20 12:21:28.626265] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:22.975 [2024-11-20 12:21:28.626351] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736668 ] 00:07:22.975 [2024-11-20 12:21:28.700844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.234 [2024-11-20 12:21:28.738365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.234 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:24.171 [2024-11-20T11:21:29.935Z] ====================================== 00:07:24.171 [2024-11-20T11:21:29.935Z] busy:2205737584 (cyc) 00:07:24.171 [2024-11-20T11:21:29.935Z] total_run_count: 439000 00:07:24.171 [2024-11-20T11:21:29.935Z] tsc_hz: 2200000000 (cyc) 00:07:24.171 [2024-11-20T11:21:29.935Z] ====================================== 00:07:24.171 [2024-11-20T11:21:29.935Z] poller_cost: 5024 (cyc), 2283 (nsec) 00:07:24.171 00:07:24.171 real 0m1.177s 00:07:24.171 user 0m1.100s 00:07:24.171 sys 0m0.074s 00:07:24.171 12:21:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.171 12:21:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.171 ************************************ 00:07:24.171 END TEST thread_poller_perf 00:07:24.171 ************************************ 00:07:24.171 12:21:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.171 12:21:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:24.171 12:21:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.171 12:21:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.171 ************************************ 00:07:24.171 START TEST thread_poller_perf 00:07:24.171 
************************************ 00:07:24.171 12:21:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.171 [2024-11-20 12:21:29.874280] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:24.171 [2024-11-20 12:21:29.874358] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736950 ] 00:07:24.429 [2024-11-20 12:21:29.949455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.429 [2024-11-20 12:21:29.986009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.429 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:25.366 [2024-11-20T11:21:31.130Z] ====================================== 00:07:25.366 [2024-11-20T11:21:31.130Z] busy:2201538030 (cyc) 00:07:25.366 [2024-11-20T11:21:31.130Z] total_run_count: 5897000 00:07:25.366 [2024-11-20T11:21:31.130Z] tsc_hz: 2200000000 (cyc) 00:07:25.366 [2024-11-20T11:21:31.130Z] ====================================== 00:07:25.366 [2024-11-20T11:21:31.130Z] poller_cost: 373 (cyc), 169 (nsec) 00:07:25.366 00:07:25.366 real 0m1.172s 00:07:25.366 user 0m1.095s 00:07:25.366 sys 0m0.073s 00:07:25.366 12:21:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.366 12:21:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.366 ************************************ 00:07:25.366 END TEST thread_poller_perf 00:07:25.366 ************************************ 00:07:25.366 12:21:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:25.366 00:07:25.366 real 0m2.661s 00:07:25.366 user 0m2.350s 00:07:25.366 sys 0m0.326s 00:07:25.366 12:21:31 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.366 12:21:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.366 ************************************ 00:07:25.366 END TEST thread 00:07:25.366 ************************************ 00:07:25.366 12:21:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:25.366 12:21:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:25.366 12:21:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.366 12:21:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.366 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:25.625 ************************************ 00:07:25.625 START TEST app_cmdline 00:07:25.625 ************************************ 00:07:25.625 12:21:31 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:25.625 * Looking for test storage... 00:07:25.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.625 12:21:31 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.625 12:21:31 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.625 12:21:31 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.625 12:21:31 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.625 12:21:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:25.625 12:21:31 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.625 12:21:31 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.625 --rc genhtml_branch_coverage=1 
00:07:25.626 --rc genhtml_function_coverage=1 00:07:25.626 --rc genhtml_legend=1 00:07:25.626 --rc geninfo_all_blocks=1 00:07:25.626 --rc geninfo_unexecuted_blocks=1 00:07:25.626 00:07:25.626 ' 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.626 --rc genhtml_branch_coverage=1 00:07:25.626 --rc genhtml_function_coverage=1 00:07:25.626 --rc genhtml_legend=1 00:07:25.626 --rc geninfo_all_blocks=1 00:07:25.626 --rc geninfo_unexecuted_blocks=1 00:07:25.626 00:07:25.626 ' 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.626 --rc genhtml_branch_coverage=1 00:07:25.626 --rc genhtml_function_coverage=1 00:07:25.626 --rc genhtml_legend=1 00:07:25.626 --rc geninfo_all_blocks=1 00:07:25.626 --rc geninfo_unexecuted_blocks=1 00:07:25.626 00:07:25.626 ' 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.626 --rc genhtml_branch_coverage=1 00:07:25.626 --rc genhtml_function_coverage=1 00:07:25.626 --rc genhtml_legend=1 00:07:25.626 --rc geninfo_all_blocks=1 00:07:25.626 --rc geninfo_unexecuted_blocks=1 00:07:25.626 00:07:25.626 ' 00:07:25.626 12:21:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:25.626 12:21:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=737281 00:07:25.626 12:21:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 737281 00:07:25.626 12:21:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 737281 ']' 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.626 12:21:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.626 [2024-11-20 12:21:31.362591] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:25.626 [2024-11-20 12:21:31.362636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737281 ] 00:07:25.885 [2024-11-20 12:21:31.435214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.885 [2024-11-20 12:21:31.474500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.453 12:21:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.453 12:21:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:26.454 12:21:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:26.713 { 00:07:26.713 "version": "SPDK v25.01-pre git sha1 f86091626", 00:07:26.713 "fields": { 00:07:26.713 "major": 25, 00:07:26.713 "minor": 1, 00:07:26.713 "patch": 0, 00:07:26.713 "suffix": "-pre", 00:07:26.713 "commit": "f86091626" 00:07:26.713 } 00:07:26.713 } 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:26.713 12:21:32 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:26.713 12:21:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.713 12:21:32 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:26.713 12:21:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.713 12:21:32 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:26.713 12:21:32 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.713 12:21:32 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:26.714 12:21:32 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.973 request: 00:07:26.973 { 00:07:26.973 "method": "env_dpdk_get_mem_stats", 00:07:26.973 "req_id": 1 00:07:26.973 } 00:07:26.973 Got JSON-RPC error response 00:07:26.973 response: 00:07:26.973 { 00:07:26.973 "code": -32601, 00:07:26.973 "message": "Method not found" 00:07:26.973 } 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.973 12:21:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 737281 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 737281 ']' 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 737281 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 737281 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 737281' 00:07:26.973 killing process with pid 737281 00:07:26.973 12:21:32 
app_cmdline -- common/autotest_common.sh@973 -- # kill 737281 00:07:26.973 12:21:32 app_cmdline -- common/autotest_common.sh@978 -- # wait 737281 00:07:27.230 00:07:27.230 real 0m1.793s 00:07:27.230 user 0m2.119s 00:07:27.230 sys 0m0.486s 00:07:27.230 12:21:32 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.230 12:21:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.230 ************************************ 00:07:27.230 END TEST app_cmdline 00:07:27.230 ************************************ 00:07:27.230 12:21:32 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:27.230 12:21:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.230 12:21:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.230 12:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:27.489 ************************************ 00:07:27.489 START TEST version 00:07:27.489 ************************************ 00:07:27.489 12:21:32 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:27.489 * Looking for test storage... 
00:07:27.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.489 12:21:33 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.489 12:21:33 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.489 12:21:33 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.489 12:21:33 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.489 12:21:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.489 12:21:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.489 12:21:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.490 12:21:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.490 12:21:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.490 12:21:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.490 12:21:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.490 12:21:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.490 12:21:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.490 12:21:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.490 12:21:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.490 12:21:33 version -- scripts/common.sh@344 -- # case "$op" in 00:07:27.490 12:21:33 version -- scripts/common.sh@345 -- # : 1 00:07:27.490 12:21:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.490 12:21:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.490 12:21:33 version -- scripts/common.sh@365 -- # decimal 1 00:07:27.490 12:21:33 version -- scripts/common.sh@353 -- # local d=1 00:07:27.490 12:21:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.490 12:21:33 version -- scripts/common.sh@355 -- # echo 1 00:07:27.490 12:21:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.490 12:21:33 version -- scripts/common.sh@366 -- # decimal 2 00:07:27.490 12:21:33 version -- scripts/common.sh@353 -- # local d=2 00:07:27.490 12:21:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.490 12:21:33 version -- scripts/common.sh@355 -- # echo 2 00:07:27.490 12:21:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.490 12:21:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.490 12:21:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.490 12:21:33 version -- scripts/common.sh@368 -- # return 0 00:07:27.490 12:21:33 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.490 12:21:33 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.490 --rc genhtml_branch_coverage=1 00:07:27.490 --rc genhtml_function_coverage=1 00:07:27.490 --rc genhtml_legend=1 00:07:27.490 --rc geninfo_all_blocks=1 00:07:27.490 --rc geninfo_unexecuted_blocks=1 00:07:27.490 00:07:27.490 ' 00:07:27.490 12:21:33 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.490 --rc genhtml_branch_coverage=1 00:07:27.490 --rc genhtml_function_coverage=1 00:07:27.490 --rc genhtml_legend=1 00:07:27.490 --rc geninfo_all_blocks=1 00:07:27.490 --rc geninfo_unexecuted_blocks=1 00:07:27.490 00:07:27.490 ' 00:07:27.490 12:21:33 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.490 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.490 --rc genhtml_branch_coverage=1 00:07:27.490 --rc genhtml_function_coverage=1 00:07:27.490 --rc genhtml_legend=1 00:07:27.490 --rc geninfo_all_blocks=1 00:07:27.490 --rc geninfo_unexecuted_blocks=1 00:07:27.490 00:07:27.490 ' 00:07:27.490 12:21:33 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.490 --rc genhtml_branch_coverage=1 00:07:27.490 --rc genhtml_function_coverage=1 00:07:27.490 --rc genhtml_legend=1 00:07:27.490 --rc geninfo_all_blocks=1 00:07:27.490 --rc geninfo_unexecuted_blocks=1 00:07:27.490 00:07:27.490 ' 00:07:27.490 12:21:33 version -- app/version.sh@17 -- # get_header_version major 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:27.490 12:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.490 12:21:33 version -- app/version.sh@17 -- # major=25 00:07:27.490 12:21:33 version -- app/version.sh@18 -- # get_header_version minor 00:07:27.490 12:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.490 12:21:33 version -- app/version.sh@18 -- # minor=1 00:07:27.490 12:21:33 version -- app/version.sh@19 -- # get_header_version patch 00:07:27.490 12:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.490 
12:21:33 version -- app/version.sh@19 -- # patch=0 00:07:27.490 12:21:33 version -- app/version.sh@20 -- # get_header_version suffix 00:07:27.490 12:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:27.490 12:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.490 12:21:33 version -- app/version.sh@20 -- # suffix=-pre 00:07:27.490 12:21:33 version -- app/version.sh@22 -- # version=25.1 00:07:27.490 12:21:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:27.490 12:21:33 version -- app/version.sh@28 -- # version=25.1rc0 00:07:27.490 12:21:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:27.490 12:21:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:27.490 12:21:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:27.490 12:21:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:27.490 00:07:27.490 real 0m0.240s 00:07:27.490 user 0m0.151s 00:07:27.490 sys 0m0.130s 00:07:27.490 12:21:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.490 12:21:33 version -- common/autotest_common.sh@10 -- # set +x 00:07:27.490 ************************************ 00:07:27.490 END TEST version 00:07:27.490 ************************************ 00:07:27.750 12:21:33 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:27.750 12:21:33 -- spdk/autotest.sh@194 -- # uname -s 00:07:27.750 12:21:33 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:27.750 12:21:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.750 12:21:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.750 12:21:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:27.750 12:21:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.750 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:27.750 12:21:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:27.750 12:21:33 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:27.750 12:21:33 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:27.750 12:21:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.750 12:21:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.750 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:27.750 ************************************ 00:07:27.750 START TEST nvmf_tcp 00:07:27.750 ************************************ 00:07:27.750 12:21:33 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:27.750 * Looking for test storage... 
00:07:27.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:27.750 12:21:33 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.750 12:21:33 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.750 12:21:33 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.750 12:21:33 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.750 12:21:33 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.010 12:21:33 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 00:07:28.010 --rc genhtml_function_coverage=1 00:07:28.010 --rc genhtml_legend=1 00:07:28.010 --rc geninfo_all_blocks=1 00:07:28.010 --rc geninfo_unexecuted_blocks=1 00:07:28.010 00:07:28.010 ' 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 00:07:28.010 --rc genhtml_function_coverage=1 00:07:28.010 --rc genhtml_legend=1 00:07:28.010 --rc geninfo_all_blocks=1 00:07:28.010 --rc geninfo_unexecuted_blocks=1 00:07:28.010 00:07:28.010 ' 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 00:07:28.010 --rc genhtml_function_coverage=1 00:07:28.010 --rc genhtml_legend=1 00:07:28.010 --rc geninfo_all_blocks=1 00:07:28.010 --rc geninfo_unexecuted_blocks=1 00:07:28.010 00:07:28.010 ' 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 00:07:28.010 --rc genhtml_function_coverage=1 00:07:28.010 --rc genhtml_legend=1 00:07:28.010 --rc geninfo_all_blocks=1 00:07:28.010 --rc geninfo_unexecuted_blocks=1 00:07:28.010 00:07:28.010 ' 00:07:28.010 12:21:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:28.010 12:21:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:28.010 12:21:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.010 12:21:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.010 ************************************ 00:07:28.010 START TEST nvmf_target_core 00:07:28.010 ************************************ 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:28.010 * Looking for test storage... 
00:07:28.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 00:07:28.010 --rc genhtml_function_coverage=1 00:07:28.010 --rc genhtml_legend=1 00:07:28.010 --rc geninfo_all_blocks=1 00:07:28.010 --rc geninfo_unexecuted_blocks=1 00:07:28.010 00:07:28.010 ' 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 
00:07:28.010 --rc genhtml_function_coverage=1 00:07:28.010 --rc genhtml_legend=1 00:07:28.010 --rc geninfo_all_blocks=1 00:07:28.010 --rc geninfo_unexecuted_blocks=1 00:07:28.010 00:07:28.010 ' 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 00:07:28.010 --rc genhtml_function_coverage=1 00:07:28.010 --rc genhtml_legend=1 00:07:28.010 --rc geninfo_all_blocks=1 00:07:28.010 --rc geninfo_unexecuted_blocks=1 00:07:28.010 00:07:28.010 ' 00:07:28.010 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.010 --rc genhtml_branch_coverage=1 00:07:28.011 --rc genhtml_function_coverage=1 00:07:28.011 --rc genhtml_legend=1 00:07:28.011 --rc geninfo_all_blocks=1 00:07:28.011 --rc geninfo_unexecuted_blocks=1 00:07:28.011 00:07:28.011 ' 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.011 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
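The `[: : integer expression expected` warning above comes from `nvmf/common.sh` line 33 testing an empty variable with `-eq` (`'[' '' -eq 1 ']'`). A minimal sketch of the failure mode and a defensive alternative; `SPDK_TEST_CRYPTO` is a hypothetical stand-in name, not necessarily the variable that was empty in this run:

```shell
#!/usr/bin/env bash
# The flag variable is unset/empty, as in the log.
SPDK_TEST_CRYPTO=""

# This form errors when the variable is empty, exactly as in the log:
#   [: : integer expression expected
# test's exit status is then 2 (error), so the else branch runs.
if [ "$SPDK_TEST_CRYPTO" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "disabled or unset"
fi

# Defensive alternative: default the empty value to 0 before comparing.
if [ "${SPDK_TEST_CRYPTO:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled or unset"
fi
```

Because the test script runs without `set -e` at that point, the warning is printed but the run continues, which is why the log proceeds normally afterwards.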
00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.322 ************************************ 00:07:28.322 START TEST nvmf_abort 00:07:28.322 ************************************ 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:28.322 * Looking for test storage... 
00:07:28.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.322 
12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.322 --rc genhtml_branch_coverage=1 00:07:28.322 --rc genhtml_function_coverage=1 00:07:28.322 --rc genhtml_legend=1 00:07:28.322 --rc geninfo_all_blocks=1 00:07:28.322 --rc 
geninfo_unexecuted_blocks=1 00:07:28.322 00:07:28.322 ' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.322 --rc genhtml_branch_coverage=1 00:07:28.322 --rc genhtml_function_coverage=1 00:07:28.322 --rc genhtml_legend=1 00:07:28.322 --rc geninfo_all_blocks=1 00:07:28.322 --rc geninfo_unexecuted_blocks=1 00:07:28.322 00:07:28.322 ' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.322 --rc genhtml_branch_coverage=1 00:07:28.322 --rc genhtml_function_coverage=1 00:07:28.322 --rc genhtml_legend=1 00:07:28.322 --rc geninfo_all_blocks=1 00:07:28.322 --rc geninfo_unexecuted_blocks=1 00:07:28.322 00:07:28.322 ' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.322 --rc genhtml_branch_coverage=1 00:07:28.322 --rc genhtml_function_coverage=1 00:07:28.322 --rc genhtml_legend=1 00:07:28.322 --rc geninfo_all_blocks=1 00:07:28.322 --rc geninfo_unexecuted_blocks=1 00:07:28.322 00:07:28.322 ' 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
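The `lt 1.15 2` / `cmp_versions` traces that recur above compare the installed lcov version against 2 field by field (split on `.` and `-`, missing fields treated as 0). A simplified standalone bash sketch of that comparison, assuming the behaviour visible in the trace rather than reproducing `scripts/common.sh` verbatim:

```shell
#!/usr/bin/env bash
# cmp_lt VER1 VER2: exit 0 if VER1 < VER2, comparing dot/dash-separated
# numeric fields left to right, padding the shorter version with zeros.
cmp_lt() {
  local IFS=.- v=0
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( v++ ))
  done
  return 1  # equal is not less-than
}

# As in the log: lcov 1.15 is older than 2, so the branch-coverage
# LCOV_OPTS for newer lcov are selected accordingly.
cmp_lt 1.15 2 && echo "older" || echo "newer-or-equal"
```

In the trace this evaluates field 0 (`1` vs `2`) and returns immediately, which is why only one round of `decimal 1` / `decimal 2` appears per check.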
00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.322 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.323 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.323 12:21:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.323 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.928 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.929 12:21:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:07:34.929 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:07:34.929 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.929 12:21:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:07:34.929 Found net devices under 0000:1a:00.0: cvl_0_0 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:1a:00.1: cvl_0_1' 00:07:34.929 Found net devices under 0000:1a:00.1: cvl_0_1 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.929 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:07:34.929 00:07:34.929 --- 10.0.0.2 ping statistics --- 00:07:34.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.929 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:07:34.929 00:07:34.929 --- 10.0.0.1 ping statistics --- 00:07:34.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.929 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=741209 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 741209 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 741209 ']' 00:07:34.929 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.930 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.930 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.930 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.930 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.930 [2024-11-20 12:21:40.310691] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:34.930 [2024-11-20 12:21:40.310737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.930 [2024-11-20 12:21:40.387035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.930 [2024-11-20 12:21:40.425955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.930 [2024-11-20 12:21:40.425992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.930 [2024-11-20 12:21:40.425998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.930 [2024-11-20 12:21:40.426003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.930 [2024-11-20 12:21:40.426008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
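The `waitforlisten 741209` call above polls until the freshly launched `nvmf_tgt` process is alive and listening on `/var/tmp/spdk.sock`. The liveness half of that pattern is a bounded retry loop around `kill -0`, sketched here with a placeholder child process instead of the real target (the loop bound of 100 matches the `max_retries=100` in the trace; everything else is illustrative):

```shell
# Sketch of the waitforlisten liveness check: poll with `kill -0`
# (signal 0 = existence test only) until the pid responds, up to
# max_retries attempts. A background `sleep` stands in for nvmf_tgt.
sleep 2 & pid=$!
for i in $(seq 1 100); do
  if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is up after $i check(s)"
    break
  fi
  sleep 0.1
done
kill "$pid" 2>/dev/null
wait 2>/dev/null
```

The real helper additionally retries an RPC against the UNIX socket, since a live pid does not yet mean the RPC server is accepting connections.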
00:07:34.930 [2024-11-20 12:21:40.427405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.930 [2024-11-20 12:21:40.427492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.930 [2024-11-20 12:21:40.427493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.497 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.497 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:35.497 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.497 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.497 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.497 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 [2024-11-20 12:21:41.171667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 Malloc0 00:07:35.498 12:21:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 Delay0 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 [2024-11-20 12:21:41.247295] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.757 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.757 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:35.757 [2024-11-20 12:21:41.342992] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:37.660 Initializing NVMe Controllers 00:07:37.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:37.660 controller IO queue size 128 less than required 00:07:37.660 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:37.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:37.660 Initialization complete. Launching workers. 
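The abort example's per-controller summary that the log prints can be sanity-checked with simple bookkeeping: the number of abort commands submitted should equal successes plus unsuccessful aborts plus failed submissions. A quick arithmetic check against the counters reported in this run:

```shell
# Bookkeeping check on the abort counters from this run:
# submitted = success + unsuccessful + failed-to-complete.
success=42065
unsuccessful=57
failed=0
submitted=$((success + unsuccessful + failed))
echo "abort submitted $submitted"
```

The "unsuccessful" aborts are expected here: with queue depth 128 against a delay bdev, some target I/Os complete before their abort arrives.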
00:07:37.660 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42061 00:07:37.660 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42122, failed to submit 62 00:07:37.660 success 42065, unsuccessful 57, failed 0 00:07:37.660 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.660 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.660 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.919 rmmod nvme_tcp 00:07:37.919 rmmod nvme_fabrics 00:07:37.919 rmmod nvme_keyring 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:37.919 12:21:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 741209 ']' 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 741209 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 741209 ']' 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 741209 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 741209 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 741209' 00:07:37.919 killing process with pid 741209 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 741209 00:07:37.919 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 741209 00:07:38.178 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.178 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.178 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.178 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:38.178 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:38.179 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:07:38.179 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.179 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.179 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.179 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.179 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.179 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.084 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.084 00:07:40.084 real 0m11.977s 00:07:40.084 user 0m13.503s 00:07:40.084 sys 0m5.536s 00:07:40.084 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.084 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:40.084 ************************************ 00:07:40.084 END TEST nvmf_abort 00:07:40.084 ************************************ 00:07:40.084 12:21:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:40.084 12:21:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.084 12:21:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.084 12:21:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.344 ************************************ 00:07:40.344 START TEST nvmf_ns_hotplug_stress 00:07:40.344 ************************************ 00:07:40.344 12:21:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:40.344 * Looking for test storage... 00:07:40.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.344 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:40.344 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:40.344 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.344 
12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.344 12:21:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.344 --rc genhtml_branch_coverage=1 00:07:40.344 --rc genhtml_function_coverage=1 00:07:40.344 --rc genhtml_legend=1 00:07:40.344 --rc geninfo_all_blocks=1 00:07:40.344 --rc geninfo_unexecuted_blocks=1 00:07:40.344 00:07:40.344 ' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.344 --rc genhtml_branch_coverage=1 00:07:40.344 --rc genhtml_function_coverage=1 00:07:40.344 --rc genhtml_legend=1 00:07:40.344 --rc geninfo_all_blocks=1 00:07:40.344 --rc geninfo_unexecuted_blocks=1 00:07:40.344 00:07:40.344 ' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.344 --rc genhtml_branch_coverage=1 00:07:40.344 --rc genhtml_function_coverage=1 00:07:40.344 --rc genhtml_legend=1 00:07:40.344 --rc geninfo_all_blocks=1 00:07:40.344 --rc geninfo_unexecuted_blocks=1 00:07:40.344 00:07:40.344 ' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.344 --rc genhtml_branch_coverage=1 00:07:40.344 --rc genhtml_function_coverage=1 00:07:40.344 --rc genhtml_legend=1 00:07:40.344 --rc geninfo_all_blocks=1 00:07:40.344 --rc geninfo_unexecuted_blocks=1 00:07:40.344 
00:07:40.344 ' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
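The trace above shows `nvme gen-hostnqn` producing a UUID-based NQN, with `NVME_HOSTID` then set to the embedded UUID. One way to derive the host ID from such an NQN is a longest-prefix strip on the `uuid:` marker; whether `common.sh` uses exactly this expansion is an assumption, but the format (`nqn.2014-08.org.nvmexpress:uuid:<uuid>`) is as logged:

```shell
# Sketch: extract the host UUID from a gen-hostnqn style NQN.
# ##*uuid: removes the longest prefix ending in "uuid:".
hostnqn="nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562"
hostid="${hostnqn##*uuid:}"
echo "$hostid"
```

Both values are later passed to `nvme connect` as `--hostnqn`/`--hostid`, as the `NVME_HOST` array in the trace shows.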
00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.344 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.345 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:46.914 12:21:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:07:46.914 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:07:46.914 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:46.914 12:21:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:07:46.914 Found net devices under 0000:1a:00.0: cvl_0_0 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.914 12:21:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:07:46.914 Found net devices under 0000:1a:00.1: cvl_0_1 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:46.914 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.915 12:21:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:46.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:07:46.915 00:07:46.915 --- 10.0.0.2 ping statistics --- 00:07:46.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.915 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:07:46.915 00:07:46.915 --- 10.0.0.1 ping statistics --- 00:07:46.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.915 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=745551 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 745551 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 745551 ']' 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.915 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 [2024-11-20 12:21:52.408877] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:46.915 [2024-11-20 12:21:52.408924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.915 [2024-11-20 12:21:52.486559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.915 [2024-11-20 12:21:52.525646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.915 [2024-11-20 12:21:52.525684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.915 [2024-11-20 12:21:52.525690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.915 [2024-11-20 12:21:52.525695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.915 [2024-11-20 12:21:52.525699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:46.915 [2024-11-20 12:21:52.527173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.915 [2024-11-20 12:21:52.527287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.915 [2024-11-20 12:21:52.527288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.482 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.482 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:47.482 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.482 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.482 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:47.740 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.740 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:47.741 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.741 [2024-11-20 12:21:53.412101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.741 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:47.999 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.258 [2024-11-20 12:21:53.777429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.258 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.258 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:48.516 Malloc0 00:07:48.516 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.775 Delay0 00:07:48.775 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.034 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:49.034 NULL1 00:07:49.034 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:49.292 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:49.292 12:21:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=746106 00:07:49.292 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:49.292 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.669 Read completed with error (sct=0, sc=11) 00:07:50.669 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.669 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:50.669 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:50.928 true 00:07:50.928 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:50.928 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.865 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.865 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:51.865 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:52.124 true 00:07:52.124 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:52.124 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.124 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.383 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:52.383 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:52.642 true 00:07:52.642 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:52.642 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.902 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.902 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:52.902 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:53.160 true 00:07:53.160 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:53.160 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.099 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.099 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:54.100 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:54.358 true 00:07:54.358 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:54.358 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.617 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.617 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:54.876 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:54.876 true 00:07:54.876 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:54.876 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.136 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.395 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:55.395 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:55.395 true 00:07:55.395 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:55.395 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.652 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.911 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:55.911 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:55.911 true 00:07:55.911 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:55.911 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.289 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.289 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:57.289 12:22:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:57.549 true 00:07:57.549 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:57.549 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.485 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.485 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:58.485 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:58.744 true 00:07:58.744 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:58.744 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.003 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.003 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:59.003 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:59.262 true 00:07:59.262 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:07:59.262 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.480 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.480 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:00.480 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:00.738 true 00:08:00.738 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:00.738 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.673 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.673 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:01.673 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:01.932 true 00:08:01.932 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:01.932 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.190 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.190 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:02.190 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:02.449 true 00:08:02.449 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:02.449 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.828 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.828 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:03.828 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:03.828 true 00:08:04.088 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:04.088 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.025 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.025 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:05.025 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:05.025 true 00:08:05.284 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:05.284 12:22:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.284 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.553 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:05.553 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:05.553 true 00:08:05.811 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:05.811 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.811 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.070 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:06.070 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:06.329 true 00:08:06.329 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:06.329 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.337 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.337 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:07.337 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:07.337 true 00:08:07.337 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:07.337 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.628 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.887 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:07.887 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:07.887 true 00:08:07.887 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:07.887 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.146 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.490 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:08.490 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:08.490 true 00:08:08.490 12:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:08.490 12:22:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.427 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.427 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:09.427 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:09.686 true 00:08:09.686 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:09.686 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.946 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.205 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:10.205 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:10.205 true 00:08:10.205 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:10.205 12:22:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.585 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:08:11.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.586 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:11.586 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:11.844 true 00:08:11.844 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:11.844 12:22:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.781 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.781 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:12.781 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:13.040 true 00:08:13.040 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:13.040 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.300 12:22:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.300 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:13.300 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:13.561 true 00:08:13.561 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:13.561 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.824 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.082 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:14.082 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:14.082 true 00:08:14.082 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:14.082 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:14.340 12:22:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.597 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:14.597 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:14.597 true 00:08:14.597 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:14.597 12:22:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.976 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.976 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:15.976 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1029 00:08:16.234 true 00:08:16.234 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:16.234 12:22:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.170 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.170 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:17.170 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:17.430 true 00:08:17.430 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:17.430 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.689 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.689 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:17.689 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:17.948 true 00:08:17.948 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:17.948 12:22:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.327 12:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.327 12:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:19.327 12:22:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:19.327 true 00:08:19.586 12:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:19.586 12:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.523 12:22:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.523 Initializing NVMe Controllers 
00:08:20.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.523 Controller IO queue size 128, less than required. 00:08:20.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:20.523 Controller IO queue size 128, less than required. 00:08:20.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:20.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:20.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:20.523 Initialization complete. Launching workers. 00:08:20.523 ======================================================== 00:08:20.523 Latency(us) 00:08:20.523 Device Information : IOPS MiB/s Average min max 00:08:20.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2081.73 1.02 40130.67 1577.93 1012892.04 00:08:20.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17974.97 8.78 7120.89 1995.68 341303.84 00:08:20.523 ======================================================== 00:08:20.523 Total : 20056.70 9.79 10547.05 1577.93 1012892.04 00:08:20.523 00:08:20.523 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:20.523 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:20.782 true 00:08:20.782 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 746106 00:08:20.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (746106) - No such process 00:08:20.782 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- 
# wait 746106 00:08:20.782 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.782 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:21.300 null0 00:08:21.300 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.300 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.300 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:21.300 null1 00:08:21.300 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.300 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.300 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:21.558 null2 00:08:21.558 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.558 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.558 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:21.817 null3 00:08:21.817 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.817 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.817 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:22.076 null4 00:08:22.076 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:22.076 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:22.076 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:22.076 null5 00:08:22.076 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:22.076 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:22.076 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:22.334 null6 00:08:22.334 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:22.334 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:22.334 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:22.594 null7 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.594 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 752252 752253 752255 752257 752259 752262 752263 752264 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.595 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.855 12:22:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.855 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.114 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.374 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.632 12:22:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.632 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.633 12:22:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.633 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.891 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.150 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.151 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.151 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.151 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.151 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.151 12:22:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.151 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.151 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.409 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.409 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.409 12:22:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.668 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.928 12:22:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.928 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.187 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.187 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.187 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.187 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.187 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.187 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.188 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.447 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.705 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.705 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.706 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.965 12:22:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.965 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.224 12:22:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.483 rmmod nvme_tcp 00:08:26.483 rmmod nvme_fabrics 00:08:26.483 rmmod nvme_keyring 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 745551 ']' 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 745551 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 745551 ']' 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 745551 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 745551 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 745551' 00:08:26.483 killing process with pid 745551 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 745551 00:08:26.483 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 745551 00:08:26.742 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.742 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:26.742 12:22:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:26.742 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:26.742 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:26.742 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:26.742 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:26.743 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:26.743 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:26.743 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.743 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.743 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.282 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.282 00:08:29.283 real 0m48.600s 00:08:29.283 user 3m14.909s 00:08:29.283 sys 0m15.645s 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.283 ************************************ 00:08:29.283 END TEST nvmf_ns_hotplug_stress 00:08:29.283 ************************************ 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.283 ************************************ 00:08:29.283 START TEST nvmf_delete_subsystem 00:08:29.283 ************************************ 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:29.283 * Looking for test storage... 00:08:29.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.283 
12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.283 --rc genhtml_branch_coverage=1 00:08:29.283 --rc genhtml_function_coverage=1 00:08:29.283 --rc genhtml_legend=1 
00:08:29.283 --rc geninfo_all_blocks=1 00:08:29.283 --rc geninfo_unexecuted_blocks=1 00:08:29.283 00:08:29.283 ' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.283 --rc genhtml_branch_coverage=1 00:08:29.283 --rc genhtml_function_coverage=1 00:08:29.283 --rc genhtml_legend=1 00:08:29.283 --rc geninfo_all_blocks=1 00:08:29.283 --rc geninfo_unexecuted_blocks=1 00:08:29.283 00:08:29.283 ' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.283 --rc genhtml_branch_coverage=1 00:08:29.283 --rc genhtml_function_coverage=1 00:08:29.283 --rc genhtml_legend=1 00:08:29.283 --rc geninfo_all_blocks=1 00:08:29.283 --rc geninfo_unexecuted_blocks=1 00:08:29.283 00:08:29.283 ' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.283 --rc genhtml_branch_coverage=1 00:08:29.283 --rc genhtml_function_coverage=1 00:08:29.283 --rc genhtml_legend=1 00:08:29.283 --rc geninfo_all_blocks=1 00:08:29.283 --rc geninfo_unexecuted_blocks=1 00:08:29.283 00:08:29.283 ' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.283 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.284 12:22:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.284 12:22:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:35.857 12:22:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:08:35.857 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:08:35.857 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.857 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:08:35.858 Found net devices under 0000:1a:00.0: cvl_0_0 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.858 12:22:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:08:35.858 Found net devices under 0000:1a:00.1: cvl_0_1 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:35.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:08:35.858 00:08:35.858 --- 10.0.0.2 ping statistics --- 00:08:35.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.858 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:08:35.858 00:08:35.858 --- 10.0.0.1 ping statistics --- 00:08:35.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.858 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=756981 00:08:35.858 12:22:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 756981 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 756981 ']' 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.858 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.858 [2024-11-20 12:22:41.041658] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:35.858 [2024-11-20 12:22:41.041702] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.858 [2024-11-20 12:22:41.120958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:35.858 [2024-11-20 12:22:41.159343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:35.858 [2024-11-20 12:22:41.159376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:35.858 [2024-11-20 12:22:41.159383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:35.858 [2024-11-20 12:22:41.159388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:35.858 [2024-11-20 12:22:41.159394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:35.858 [2024-11-20 12:22:41.160640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:35.858 [2024-11-20 12:22:41.160641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.118 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:36.118 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:08:36.118 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:36.118 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:36.118 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:36.376 [2024-11-20 12:22:41.916874] tcp.c:
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.376 [2024-11-20 12:22:41.937039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.376 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.377 NULL1 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.377 12:22:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.377 Delay0 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=757195 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:36.377 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:36.377 [2024-11-20 12:22:42.048726] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:38.281 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.281 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.281 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error 
(sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 starting I/O failed: -6 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 [2024-11-20 12:22:44.162566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d0680 is same with the state(6) to be set 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.540 Write completed with error (sct=0, sc=8) 00:08:38.540 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 
Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error 
(sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 [2024-11-20 12:22:44.163481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d02c0 is same with the state(6) to be set 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Read completed with error (sct=0, sc=8) 
00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 starting I/O failed: -6 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 [2024-11-20 12:22:44.167317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f752800d4d0 is same with the state(6) to be set 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 
Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Write completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:38.541 Read completed with error (sct=0, sc=8) 00:08:39.478 [2024-11-20 12:22:45.143944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d19a0 is same with the state(6) to be set 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with 
error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 [2024-11-20 12:22:45.166147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d0860 is same with the state(6) to be set 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Write completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, 
sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.478 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 [2024-11-20 12:22:45.166258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d04a0 is same with the state(6) to be set 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 [2024-11-20 12:22:45.169513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f752800d020 is same with the state(6) to be set 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 
00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Read completed with error (sct=0, sc=8) 00:08:39.479 Write completed with error (sct=0, sc=8) 00:08:39.479 [2024-11-20 12:22:45.169842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f752800d800 is same with the state(6) to be set 00:08:39.479 Initializing NVMe Controllers 00:08:39.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:39.479 Controller IO queue size 128, less than required. 00:08:39.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:39.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:39.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:39.479 Initialization complete. Launching workers. 
00:08:39.479 ========================================================
00:08:39.479 Latency(us)
00:08:39.479 Device Information : IOPS MiB/s Average min max
00:08:39.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.41 0.08 903542.02 556.57 1005820.03
00:08:39.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.93 0.08 919666.51 197.37 1009406.78
00:08:39.479 ========================================================
00:08:39.479 Total : 324.34 0.16 911443.27 197.37 1009406.78
00:08:39.479
00:08:39.479 [2024-11-20 12:22:45.170321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d19a0 (9): Bad file descriptor
00:08:39.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:39.479 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.479 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:39.479 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 757195
00:08:39.479 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 757195
00:08:40.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (757195) - No such process
00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 757195
00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:40.047 12:22:45
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 757195 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 757195 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.047 
12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.047 [2024-11-20 12:22:45.700917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.047 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.048 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.048 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=757808 00:08:40.048 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:40.048 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.048 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:40.048 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.048 [2024-11-20 12:22:45.789770] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:40.615 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.615 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:40.615 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.183 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.183 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:41.183 12:22:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.752 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.752 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:41.752 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.011 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.011 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:42.011 12:22:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.582 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.582 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:42.582 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.150 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.150 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:43.150 12:22:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.410 Initializing NVMe Controllers 00:08:43.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.410 Controller IO queue size 128, less than required. 00:08:43.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.410 Initialization complete. Launching workers. 00:08:43.410 ======================================================== 00:08:43.410 Latency(us) 00:08:43.410 Device Information : IOPS MiB/s Average min max 00:08:43.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002687.20 1000108.83 1043910.68 00:08:43.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003769.19 1000147.77 1009529.53 00:08:43.410 ======================================================== 00:08:43.410 Total : 256.00 0.12 1003228.20 1000108.83 1043910.68 00:08:43.410 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 757808 00:08:43.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (757808) - No such process 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 757808 00:08:43.669 12:22:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.669 rmmod nvme_tcp 00:08:43.669 rmmod nvme_fabrics 00:08:43.669 rmmod nvme_keyring 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 756981 ']' 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 756981 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 756981 ']' 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 756981 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756981 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756981' 00:08:43.669 killing process with pid 756981 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 756981 00:08:43.669 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 756981 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.928 12:22:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.928 12:22:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.466 00:08:46.466 real 0m17.058s 00:08:46.466 user 0m30.582s 00:08:46.466 sys 0m5.746s 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.466 ************************************ 00:08:46.466 END TEST nvmf_delete_subsystem 00:08:46.466 ************************************ 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.466 ************************************ 00:08:46.466 START TEST nvmf_host_management 00:08:46.466 ************************************ 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:46.466 * Looking for test storage... 
00:08:46.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:46.466 12:22:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:46.466 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.467 12:22:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.467 --rc genhtml_branch_coverage=1 00:08:46.467 --rc genhtml_function_coverage=1 00:08:46.467 --rc genhtml_legend=1 00:08:46.467 --rc geninfo_all_blocks=1 00:08:46.467 --rc geninfo_unexecuted_blocks=1 00:08:46.467 00:08:46.467 ' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.467 --rc genhtml_branch_coverage=1 00:08:46.467 --rc genhtml_function_coverage=1 00:08:46.467 --rc genhtml_legend=1 00:08:46.467 --rc geninfo_all_blocks=1 00:08:46.467 --rc geninfo_unexecuted_blocks=1 00:08:46.467 00:08:46.467 ' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.467 --rc genhtml_branch_coverage=1 00:08:46.467 --rc genhtml_function_coverage=1 00:08:46.467 --rc genhtml_legend=1 00:08:46.467 --rc geninfo_all_blocks=1 00:08:46.467 --rc geninfo_unexecuted_blocks=1 00:08:46.467 00:08:46.467 ' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.467 --rc genhtml_branch_coverage=1 00:08:46.467 --rc genhtml_function_coverage=1 00:08:46.467 --rc genhtml_legend=1 00:08:46.467 --rc geninfo_all_blocks=1 00:08:46.467 --rc geninfo_unexecuted_blocks=1 00:08:46.467 00:08:46.467 ' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.467 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.039 12:22:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.039 12:22:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:08:53.039 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.039 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:08:53.040 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.040 12:22:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:08:53.040 Found net devices under 0000:1a:00.0: cvl_0_0 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:08:53.040 Found net devices under 0000:1a:00.1: cvl_0_1 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.040 12:22:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.040 12:22:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
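The `nvmf_tcp_init` steps traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target port into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420) can be sketched as a standalone script. This is a hedged reconstruction, not the harness code itself: the function name `setup_tcp_netns` and the `RUN=echo` dry-run switch are mine, so the commands print instead of executing (the real steps need root).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed in the trace above.
# Interface/namespace names come from the log; RUN=echo is my addition
# so the steps can be printed without root privileges.
set -euo pipefail

RUN="${RUN:-echo}"   # set RUN= (empty) to actually execute; needs root

setup_tcp_netns() {
    local target_if=$1 initiator_if=$2 ns=$3
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$target_if" netns "$ns"                         # target side lives in the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"                  # initiator IP
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if" # target IP
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Putting the target port in its own namespace is what lets one host exercise a real TCP path between initiator and target, as the cross-namespace pings below confirm.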
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:53.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:53.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms
00:08:53.040
00:08:53.040 --- 10.0.0.2 ping statistics ---
00:08:53.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:53.040 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:53.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:53.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:08:53.040
00:08:53.040 --- 10.0.0.1 ping statistics ---
00:08:53.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:53.040 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
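The `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` step above composes the target's argv with an `ip netns exec` prefix array, so every later launch of the app transparently runs inside the test namespace. A minimal sketch of that array-prefix idiom, with the harmless `env` standing in for `ip netns exec cvl_0_0_ns_spdk` so it runs without root:

```shell
#!/usr/bin/env bash
# Array-prefix idiom: prepend a launcher command to an existing argv.
set -euo pipefail

NS_CMD=(env)                        # stand-in for: (ip netns exec cvl_0_0_ns_spdk)
APP=(echo "nvmf_tgt would start here")
APP=("${NS_CMD[@]}" "${APP[@]}")    # compose: prefix + original argv

"${APP[@]}"
```

Keeping argv as a bash array (rather than a flat string) preserves word boundaries, so arguments with spaces survive the composition.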
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=762360 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 762360 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 762360 ']' 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
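The `waitforlisten` call above blocks until the freshly forked `nvmf_tgt` (pid 762360) is reachable on `/var/tmp/spdk.sock`. A minimal sketch of that wait-for-socket idiom, under stated assumptions: the 0.1 s poll interval is mine, and SPDK's real helper additionally probes the socket with an RPC rather than only checking that the socket file exists.

```shell
#!/usr/bin/env bash
# Poll until a daemon has created its RPC UNIX socket, or give up.
set -euo pipefail

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # daemon died while starting
        [ -S "$sock" ] && return 0              # socket is there: ready
        sleep 0.1
    done
    return 1                                    # gave up after max_retries polls
}
```

The harness calls this right after launching the target and before sending any RPC commands, which is why the trace shows the "Waiting for process..." message ahead of any `rpc_cmd` output.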
00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.040 12:22:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.040 [2024-11-20 12:22:58.246230] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:53.040 [2024-11-20 12:22:58.246280] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.040 [2024-11-20 12:22:58.322504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.040 [2024-11-20 12:22:58.364578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.040 [2024-11-20 12:22:58.364611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.040 [2024-11-20 12:22:58.364617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.040 [2024-11-20 12:22:58.364622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.040 [2024-11-20 12:22:58.364627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:53.040 [2024-11-20 12:22:58.366348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.040 [2024-11-20 12:22:58.366460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.040 [2024-11-20 12:22:58.366595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.040 [2024-11-20 12:22:58.366596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.609 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.610 [2024-11-20 12:22:59.111652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:53.610 12:22:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.610 Malloc0 00:08:53.610 [2024-11-20 12:22:59.189300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=762434 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 762434 /var/tmp/bdevperf.sock 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 762434 ']' 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.610 { 00:08:53.610 "params": { 00:08:53.610 "name": "Nvme$subsystem", 00:08:53.610 "trtype": "$TEST_TRANSPORT", 00:08:53.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.610 "adrfam": "ipv4", 00:08:53.610 "trsvcid": "$NVMF_PORT", 00:08:53.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.610 "hdgst": ${hdgst:-false}, 
00:08:53.610 "ddgst": ${ddgst:-false}
00:08:53.610 },
00:08:53.610 "method": "bdev_nvme_attach_controller"
00:08:53.610 }
00:08:53.610 EOF
00:08:53.610 )")
00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:53.610 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:53.610 "params": {
00:08:53.610 "name": "Nvme0",
00:08:53.610 "trtype": "tcp",
00:08:53.610 "traddr": "10.0.0.2",
00:08:53.610 "adrfam": "ipv4",
00:08:53.610 "trsvcid": "4420",
00:08:53.610 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:53.610 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:53.610 "hdgst": false,
00:08:53.610 "ddgst": false
00:08:53.610 },
00:08:53.610 "method": "bdev_nvme_attach_controller"
00:08:53.610 }'
00:08:53.610 [2024-11-20 12:22:59.284528] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
00:08:53.610 [2024-11-20 12:22:59.284568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762434 ]
00:08:53.610 [2024-11-20 12:22:59.356201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.871 [2024-11-20 12:22:59.394355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.871 Running I/O for 10 seconds...
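The `gen_nvmf_target_json` trace above shows the pattern: one here-doc JSON fragment is appended to `config` per subsystem index, then the fragments are joined (`IFS=,`) and passed through `jq .` to validate and pretty-print before landing on bdevperf's `--json /dev/fd/63`. A sketch under assumptions: `gen_target_json` is my name for it, the defaults mirror the values resolved in the log (tcp, 10.0.0.2, 4420), and wrapping the joined fragments in a JSON array is my simplification of how the final config is assembled.

```shell
#!/usr/bin/env bash
# Build one bdev_nvme_attach_controller config fragment per subsystem.
set -euo pipefail

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local joined
    joined=$(IFS=,; printf '%s' "${config[*]}")
    jq . <<<"[$joined]"   # validate and pretty-print the merged config
}

gen_target_json 0
```

The `jq .` pass is cheap insurance: a malformed fragment fails loudly here instead of as an opaque bdevperf parse error.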
00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:54.131 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.131 12:22:59 
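The `waitforio` loop traced above polls bdevperf's iostat until the bdev has completed at least 100 reads (here it saw 131 on the first poll and broke out immediately). A minimal sketch of that loop, assuming a `get_iostat` stub in place of the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b <bdev>` call:

```shell
#!/usr/bin/env bash
# Poll per-bdev read statistics until enough I/O has completed.
set -euo pipefail

get_iostat() {
    # Stub for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b "$1"
    printf '{"bdevs": [{"name": "%s", "num_read_ops": 131}]}\n' "$1"
}

waitforio() {
    local bdev=$1 i read_io_count ret=1
    for ((i = 10; i != 0; i--)); do          # up to 10 polls, as in the harness
        read_io_count=$(get_iostat "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                            # enough reads observed
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio Nvme0n1 && echo "I/O flowing on Nvme0n1"
```

Gating the destructive steps that follow (removing and re-adding the host) on observed I/O ensures the test exercises a live connection, not an idle one.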
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.131 [2024-11-20 12:22:59.717009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:54.131 [2024-11-20 12:22:59.717039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.131 [2024-11-20 12:22:59.717048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:54.131 [2024-11-20 12:22:59.717059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.131 [2024-11-20 12:22:59.717066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:54.131 [2024-11-20 12:22:59.717072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.131 [2024-11-20 12:22:59.717078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:54.131 [2024-11-20 12:22:59.717084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.131 [2024-11-20 12:22:59.717091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e940 is same with the state(6) to be set 00:08:54.131 [2024-11-20 12:22:59.717151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.131 [2024-11-20 12:22:59.717159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:54.131 [2024-11-20 12:22:59.717174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.131 [2024-11-20 12:22:59.717183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.131 [2024-11-20 12:22:59.717194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.131 [2024-11-20 12:22:59.717202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.131 [2024-11-20 12:22:59.717213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.131 [2024-11-20 12:22:59.717221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717270] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 
[2024-11-20 12:22:59.717509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.132 [2024-11-20 12:22:59.717755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.132 [2024-11-20 12:22:59.717762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 
[2024-11-20 12:22:59.717808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.717988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.717996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.718002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.718009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.718015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.718022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.718028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.718035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.133 [2024-11-20 12:22:59.718041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:54.133 [2024-11-20 12:22:59.718922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:54.133 task offset: 32768 on job bdev=Nvme0n1 fails 00:08:54.133 00:08:54.133 Latency(us) 00:08:54.133 [2024-11-20T11:22:59.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.133 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:54.133 Job: Nvme0n1 ended in about 0.12 seconds with error 00:08:54.133 Verification LBA range: start 0x0 length 0x400 00:08:54.133 Nvme0n1 : 0.12 2056.16 128.51 514.04 0.00 23292.36 1318.17 24546.21 00:08:54.133 [2024-11-20T11:22:59.897Z] =================================================================================================================== 00:08:54.133 [2024-11-20T11:22:59.897Z] Total : 2056.16 128.51 514.04 0.00 23292.36 1318.17 24546.21 00:08:54.133 [2024-11-20 12:22:59.720440] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.133 [2024-11-20 12:22:59.720457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1e940 (9): Bad file descriptor 00:08:54.133 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.133 12:22:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:54.133 [2024-11-20 12:22:59.732601] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 762434 00:08:55.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (762434) - No such process 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.070 { 00:08:55.070 "params": { 00:08:55.070 "name": "Nvme$subsystem", 00:08:55.070 "trtype": "$TEST_TRANSPORT", 00:08:55.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.070 "adrfam": "ipv4", 00:08:55.070 "trsvcid": "$NVMF_PORT", 00:08:55.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.070 "hdgst": ${hdgst:-false}, 00:08:55.070 "ddgst": ${ddgst:-false} 00:08:55.070 }, 00:08:55.070 "method": "bdev_nvme_attach_controller" 00:08:55.070 } 00:08:55.070 EOF 00:08:55.070 )") 00:08:55.070 12:23:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:55.070 12:23:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.070 "params": { 00:08:55.070 "name": "Nvme0", 00:08:55.070 "trtype": "tcp", 00:08:55.070 "traddr": "10.0.0.2", 00:08:55.070 "adrfam": "ipv4", 00:08:55.070 "trsvcid": "4420", 00:08:55.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:55.071 "hdgst": false, 00:08:55.071 "ddgst": false 00:08:55.071 }, 00:08:55.071 "method": "bdev_nvme_attach_controller" 00:08:55.071 }' 00:08:55.071 [2024-11-20 12:23:00.779527] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:55.071 [2024-11-20 12:23:00.779574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762728 ] 00:08:55.329 [2024-11-20 12:23:00.856264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.329 [2024-11-20 12:23:00.894424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.329 Running I/O for 1 seconds... 
00:08:56.703 2533.00 IOPS, 158.31 MiB/s 00:08:56.703 Latency(us) 00:08:56.703 [2024-11-20T11:23:02.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.703 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:56.703 Verification LBA range: start 0x0 length 0x400 00:08:56.703 Nvme0n1 : 1.01 2571.45 160.72 0.00 0.00 24422.41 2144.81 24069.59 00:08:56.703 [2024-11-20T11:23:02.467Z] =================================================================================================================== 00:08:56.703 [2024-11-20T11:23:02.467Z] Total : 2571.45 160.72 0.00 0.00 24422.41 2144.81 24069.59 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.703 12:23:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.703 rmmod nvme_tcp 00:08:56.703 rmmod nvme_fabrics 00:08:56.703 rmmod nvme_keyring 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 762360 ']' 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 762360 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 762360 ']' 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 762360 00:08:56.703 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:56.704 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.704 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 762360 00:08:56.704 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:56.704 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:56.704 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 762360' 00:08:56.704 killing process with pid 762360 00:08:56.704 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 762360 00:08:56.704 12:23:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 762360 00:08:56.961 [2024-11-20 12:23:02.532303] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.961 12:23:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.866 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:59.125 00:08:59.125 real 0m12.950s 00:08:59.125 user 0m20.597s 
00:08:59.125 sys 0m5.651s 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.125 ************************************ 00:08:59.125 END TEST nvmf_host_management 00:08:59.125 ************************************ 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.125 ************************************ 00:08:59.125 START TEST nvmf_lvol 00:08:59.125 ************************************ 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:59.125 * Looking for test storage... 
00:08:59.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.125 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.126 12:23:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.126 --rc genhtml_branch_coverage=1 00:08:59.126 --rc genhtml_function_coverage=1 00:08:59.126 --rc genhtml_legend=1 00:08:59.126 --rc geninfo_all_blocks=1 00:08:59.126 --rc geninfo_unexecuted_blocks=1 
00:08:59.126 00:08:59.126 ' 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.126 --rc genhtml_branch_coverage=1 00:08:59.126 --rc genhtml_function_coverage=1 00:08:59.126 --rc genhtml_legend=1 00:08:59.126 --rc geninfo_all_blocks=1 00:08:59.126 --rc geninfo_unexecuted_blocks=1 00:08:59.126 00:08:59.126 ' 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.126 --rc genhtml_branch_coverage=1 00:08:59.126 --rc genhtml_function_coverage=1 00:08:59.126 --rc genhtml_legend=1 00:08:59.126 --rc geninfo_all_blocks=1 00:08:59.126 --rc geninfo_unexecuted_blocks=1 00:08:59.126 00:08:59.126 ' 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.126 --rc genhtml_branch_coverage=1 00:08:59.126 --rc genhtml_function_coverage=1 00:08:59.126 --rc genhtml_legend=1 00:08:59.126 --rc geninfo_all_blocks=1 00:08:59.126 --rc geninfo_unexecuted_blocks=1 00:08:59.126 00:08:59.126 ' 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.126 12:23:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.126 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.385 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.386 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:09:05.962 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:09:05.962 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.962 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.963 
12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:09:05.963 Found net devices under 0000:1a:00.0: cvl_0_0 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.963 12:23:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:09:05.963 Found net devices under 0000:1a:00.1: cvl_0_1 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.963 12:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:05.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:09:05.963 00:09:05.963 --- 10.0.0.2 ping statistics --- 00:09:05.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.963 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:09:05.963 00:09:05.963 --- 10.0.0.1 ping statistics --- 00:09:05.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.963 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=766872 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 766872 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 766872 ']' 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.963 12:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.963 [2024-11-20 12:23:11.217958] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:05.963 [2024-11-20 12:23:11.218016] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.963 [2024-11-20 12:23:11.298020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.963 [2024-11-20 12:23:11.338149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.963 [2024-11-20 12:23:11.338182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.964 [2024-11-20 12:23:11.338189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.964 [2024-11-20 12:23:11.338195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.964 [2024-11-20 12:23:11.338199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:05.964 [2024-11-20 12:23:11.339640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.964 [2024-11-20 12:23:11.339666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.964 [2024-11-20 12:23:11.339667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:06.531 [2024-11-20 12:23:12.228429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.531 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.790 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:06.790 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.049 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:07.049 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:07.308 12:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:07.308 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2cb5dabf-47f3-4c0e-b4e7-110ad45c42f8 00:09:07.308 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2cb5dabf-47f3-4c0e-b4e7-110ad45c42f8 lvol 20 00:09:07.566 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fc2bb146-dbab-4521-88b8-2c5056eb17ac 00:09:07.566 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:07.823 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc2bb146-dbab-4521-88b8-2c5056eb17ac 00:09:08.082 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:08.082 [2024-11-20 12:23:13.773639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.082 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.340 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=767387 00:09:08.340 12:23:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:08.340 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:09.276 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fc2bb146-dbab-4521-88b8-2c5056eb17ac MY_SNAPSHOT 00:09:09.535 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=57f187ba-4c05-4d50-a598-cc6464fed02a 00:09:09.535 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fc2bb146-dbab-4521-88b8-2c5056eb17ac 30 00:09:09.794 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 57f187ba-4c05-4d50-a598-cc6464fed02a MY_CLONE 00:09:10.053 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b313eb76-aefe-4f99-9037-8ddce9100686 00:09:10.053 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b313eb76-aefe-4f99-9037-8ddce9100686 00:09:10.621 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 767387 00:09:18.736 Initializing NVMe Controllers 00:09:18.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:18.736 Controller IO queue size 128, less than required. 00:09:18.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:18.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:18.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:18.736 Initialization complete. Launching workers. 00:09:18.736 ======================================================== 00:09:18.736 Latency(us) 00:09:18.736 Device Information : IOPS MiB/s Average min max 00:09:18.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12891.60 50.36 9932.44 959.40 92741.90 00:09:18.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12800.20 50.00 10000.53 3519.07 39583.71 00:09:18.736 ======================================================== 00:09:18.736 Total : 25691.79 100.36 9966.36 959.40 92741.90 00:09:18.736 00:09:18.736 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:18.736 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc2bb146-dbab-4521-88b8-2c5056eb17ac 00:09:18.993 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2cb5dabf-47f3-4c0e-b4e7-110ad45c42f8 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.253 rmmod nvme_tcp 00:09:19.253 rmmod nvme_fabrics 00:09:19.253 rmmod nvme_keyring 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 766872 ']' 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 766872 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 766872 ']' 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 766872 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766872 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766872' 00:09:19.253 killing process with pid 766872 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@973 -- # kill 766872 00:09:19.253 12:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 766872 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.512 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.048 00:09:22.048 real 0m22.542s 00:09:22.048 user 1m4.310s 00:09:22.048 sys 0m7.453s 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:22.048 ************************************ 00:09:22.048 END TEST nvmf_lvol 00:09:22.048 
************************************ 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.048 ************************************ 00:09:22.048 START TEST nvmf_lvs_grow 00:09:22.048 ************************************ 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:22.048 * Looking for test storage... 00:09:22.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.048 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.048 --rc genhtml_branch_coverage=1 00:09:22.048 --rc genhtml_function_coverage=1 00:09:22.048 --rc genhtml_legend=1 00:09:22.049 --rc geninfo_all_blocks=1 00:09:22.049 --rc geninfo_unexecuted_blocks=1 00:09:22.049 00:09:22.049 ' 
00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.049 --rc genhtml_branch_coverage=1 00:09:22.049 --rc genhtml_function_coverage=1 00:09:22.049 --rc genhtml_legend=1 00:09:22.049 --rc geninfo_all_blocks=1 00:09:22.049 --rc geninfo_unexecuted_blocks=1 00:09:22.049 00:09:22.049 ' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.049 --rc genhtml_branch_coverage=1 00:09:22.049 --rc genhtml_function_coverage=1 00:09:22.049 --rc genhtml_legend=1 00:09:22.049 --rc geninfo_all_blocks=1 00:09:22.049 --rc geninfo_unexecuted_blocks=1 00:09:22.049 00:09:22.049 ' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.049 --rc genhtml_branch_coverage=1 00:09:22.049 --rc genhtml_function_coverage=1 00:09:22.049 --rc genhtml_legend=1 00:09:22.049 --rc geninfo_all_blocks=1 00:09:22.049 --rc geninfo_unexecuted_blocks=1 00:09:22.049 00:09:22.049 ' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.049 12:23:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.049 
12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.049 12:23:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.049 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.049 
12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.050 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.050 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.050 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:09:28.621 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:09:28.621 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.621 
12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:09:28.621 Found net devices under 0000:1a:00.0: cvl_0_0 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:09:28.621 Found net devices under 0000:1a:00.1: cvl_0_1 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:28.621 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:28.622 12:23:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:28.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:09:28.622 00:09:28.622 --- 10.0.0.2 ping statistics --- 00:09:28.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.622 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:09:28.622 00:09:28.622 --- 10.0.0.1 ping statistics --- 00:09:28.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.622 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=773218 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 773218 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 773218 ']' 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.622 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.622 [2024-11-20 12:23:33.757865] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:28.622 [2024-11-20 12:23:33.757912] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.622 [2024-11-20 12:23:33.836660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.622 [2024-11-20 12:23:33.875396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.622 [2024-11-20 12:23:33.875434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.622 [2024-11-20 12:23:33.875441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.622 [2024-11-20 12:23:33.875447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.622 [2024-11-20 12:23:33.875452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:28.622 [2024-11-20 12:23:33.876028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.916 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.916 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:28.916 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.916 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.916 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.916 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.916 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.176 [2024-11-20 12:23:34.762862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:29.176 ************************************ 00:09:29.176 START TEST lvs_grow_clean 00:09:29.176 ************************************ 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.176 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.436 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:29.436 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:29.696 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:29.696 12:23:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:29.696 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.696 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.696 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.696 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 lvol 150 00:09:29.955 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=63ac8b13-1e65-41fa-94cc-ef4a4d3e472d 00:09:29.955 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.955 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:30.215 [2024-11-20 12:23:35.737172] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:30.215 [2024-11-20 12:23:35.737217] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:30.215 true 00:09:30.215 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:30.215 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:30.215 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:30.215 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.474 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63ac8b13-1e65-41fa-94cc-ef4a4d3e472d 00:09:30.733 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:30.733 [2024-11-20 12:23:36.431253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.733 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=773789 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 773789 /var/tmp/bdevperf.sock 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 773789 ']' 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.992 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:30.992 [2024-11-20 12:23:36.663750] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:30.992 [2024-11-20 12:23:36.663795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773789 ] 00:09:30.992 [2024-11-20 12:23:36.738012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.251 [2024-11-20 12:23:36.778234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.820 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.820 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:31.820 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:32.079 Nvme0n1 00:09:32.079 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:32.338 [ 00:09:32.338 { 00:09:32.338 "name": "Nvme0n1", 00:09:32.338 "aliases": [ 00:09:32.338 "63ac8b13-1e65-41fa-94cc-ef4a4d3e472d" 00:09:32.338 ], 00:09:32.338 "product_name": "NVMe disk", 00:09:32.338 "block_size": 4096, 00:09:32.338 "num_blocks": 38912, 00:09:32.338 "uuid": "63ac8b13-1e65-41fa-94cc-ef4a4d3e472d", 00:09:32.338 "numa_id": 0, 00:09:32.338 "assigned_rate_limits": { 00:09:32.338 "rw_ios_per_sec": 0, 00:09:32.338 "rw_mbytes_per_sec": 0, 00:09:32.338 "r_mbytes_per_sec": 0, 00:09:32.338 "w_mbytes_per_sec": 0 00:09:32.338 }, 00:09:32.338 "claimed": false, 00:09:32.338 "zoned": false, 00:09:32.338 "supported_io_types": { 00:09:32.338 "read": true, 
00:09:32.338 "write": true, 00:09:32.338 "unmap": true, 00:09:32.338 "flush": true, 00:09:32.339 "reset": true, 00:09:32.339 "nvme_admin": true, 00:09:32.339 "nvme_io": true, 00:09:32.339 "nvme_io_md": false, 00:09:32.339 "write_zeroes": true, 00:09:32.339 "zcopy": false, 00:09:32.339 "get_zone_info": false, 00:09:32.339 "zone_management": false, 00:09:32.339 "zone_append": false, 00:09:32.339 "compare": true, 00:09:32.339 "compare_and_write": true, 00:09:32.339 "abort": true, 00:09:32.339 "seek_hole": false, 00:09:32.339 "seek_data": false, 00:09:32.339 "copy": true, 00:09:32.339 "nvme_iov_md": false 00:09:32.339 }, 00:09:32.339 "memory_domains": [ 00:09:32.339 { 00:09:32.339 "dma_device_id": "system", 00:09:32.339 "dma_device_type": 1 00:09:32.339 } 00:09:32.339 ], 00:09:32.339 "driver_specific": { 00:09:32.339 "nvme": [ 00:09:32.339 { 00:09:32.339 "trid": { 00:09:32.339 "trtype": "TCP", 00:09:32.339 "adrfam": "IPv4", 00:09:32.339 "traddr": "10.0.0.2", 00:09:32.339 "trsvcid": "4420", 00:09:32.339 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:32.339 }, 00:09:32.339 "ctrlr_data": { 00:09:32.339 "cntlid": 1, 00:09:32.339 "vendor_id": "0x8086", 00:09:32.339 "model_number": "SPDK bdev Controller", 00:09:32.339 "serial_number": "SPDK0", 00:09:32.339 "firmware_revision": "25.01", 00:09:32.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:32.339 "oacs": { 00:09:32.339 "security": 0, 00:09:32.339 "format": 0, 00:09:32.339 "firmware": 0, 00:09:32.339 "ns_manage": 0 00:09:32.339 }, 00:09:32.339 "multi_ctrlr": true, 00:09:32.339 "ana_reporting": false 00:09:32.339 }, 00:09:32.339 "vs": { 00:09:32.339 "nvme_version": "1.3" 00:09:32.339 }, 00:09:32.339 "ns_data": { 00:09:32.339 "id": 1, 00:09:32.339 "can_share": true 00:09:32.339 } 00:09:32.339 } 00:09:32.339 ], 00:09:32.339 "mp_policy": "active_passive" 00:09:32.339 } 00:09:32.339 } 00:09:32.339 ] 00:09:32.339 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=774058 
00:09:32.339 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:32.339 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:32.339 Running I/O for 10 seconds... 00:09:33.277 Latency(us) 00:09:33.277 [2024-11-20T11:23:39.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.277 Nvme0n1 : 1.00 25275.00 98.73 0.00 0.00 0.00 0.00 0.00 00:09:33.278 [2024-11-20T11:23:39.042Z] =================================================================================================================== 00:09:33.278 [2024-11-20T11:23:39.042Z] Total : 25275.00 98.73 0.00 0.00 0.00 0.00 0.00 00:09:33.278 00:09:34.218 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:34.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.478 Nvme0n1 : 2.00 25482.00 99.54 0.00 0.00 0.00 0.00 0.00 00:09:34.478 [2024-11-20T11:23:40.242Z] =================================================================================================================== 00:09:34.478 [2024-11-20T11:23:40.242Z] Total : 25482.00 99.54 0.00 0.00 0.00 0.00 0.00 00:09:34.478 00:09:34.478 true 00:09:34.478 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:34.478 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:34.738 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:34.738 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:34.738 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 774058 00:09:35.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.307 Nvme0n1 : 3.00 25521.00 99.69 0.00 0.00 0.00 0.00 0.00 00:09:35.307 [2024-11-20T11:23:41.071Z] =================================================================================================================== 00:09:35.307 [2024-11-20T11:23:41.071Z] Total : 25521.00 99.69 0.00 0.00 0.00 0.00 0.00 00:09:35.307 00:09:36.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.688 Nvme0n1 : 4.00 25586.75 99.95 0.00 0.00 0.00 0.00 0.00 00:09:36.688 [2024-11-20T11:23:42.452Z] =================================================================================================================== 00:09:36.688 [2024-11-20T11:23:42.452Z] Total : 25586.75 99.95 0.00 0.00 0.00 0.00 0.00 00:09:36.688 00:09:37.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.627 Nvme0n1 : 5.00 25629.40 100.11 0.00 0.00 0.00 0.00 0.00 00:09:37.627 [2024-11-20T11:23:43.391Z] =================================================================================================================== 00:09:37.627 [2024-11-20T11:23:43.391Z] Total : 25629.40 100.11 0.00 0.00 0.00 0.00 0.00 00:09:37.627 00:09:38.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.566 Nvme0n1 : 6.00 25633.50 100.13 0.00 0.00 0.00 0.00 0.00 00:09:38.566 [2024-11-20T11:23:44.330Z] =================================================================================================================== 00:09:38.566 
[2024-11-20T11:23:44.330Z] Total : 25633.50 100.13 0.00 0.00 0.00 0.00 0.00 00:09:38.566 00:09:39.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.504 Nvme0n1 : 7.00 25663.86 100.25 0.00 0.00 0.00 0.00 0.00 00:09:39.504 [2024-11-20T11:23:45.268Z] =================================================================================================================== 00:09:39.504 [2024-11-20T11:23:45.268Z] Total : 25663.86 100.25 0.00 0.00 0.00 0.00 0.00 00:09:39.504 00:09:40.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.444 Nvme0n1 : 8.00 25702.50 100.40 0.00 0.00 0.00 0.00 0.00 00:09:40.444 [2024-11-20T11:23:46.208Z] =================================================================================================================== 00:09:40.444 [2024-11-20T11:23:46.208Z] Total : 25702.50 100.40 0.00 0.00 0.00 0.00 0.00 00:09:40.444 00:09:41.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.383 Nvme0n1 : 9.00 25725.33 100.49 0.00 0.00 0.00 0.00 0.00 00:09:41.383 [2024-11-20T11:23:47.147Z] =================================================================================================================== 00:09:41.383 [2024-11-20T11:23:47.147Z] Total : 25725.33 100.49 0.00 0.00 0.00 0.00 0.00 00:09:41.383 00:09:42.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.418 Nvme0n1 : 10.00 25740.80 100.55 0.00 0.00 0.00 0.00 0.00 00:09:42.418 [2024-11-20T11:23:48.182Z] =================================================================================================================== 00:09:42.418 [2024-11-20T11:23:48.182Z] Total : 25740.80 100.55 0.00 0.00 0.00 0.00 0.00 00:09:42.418 00:09:42.418 00:09:42.418 Latency(us) 00:09:42.418 [2024-11-20T11:23:48.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:42.418 Nvme0n1 : 10.00 25741.13 100.55 0.00 0.00 4969.83 2412.92 11260.28 00:09:42.418 [2024-11-20T11:23:48.182Z] =================================================================================================================== 00:09:42.418 [2024-11-20T11:23:48.182Z] Total : 25741.13 100.55 0.00 0.00 4969.83 2412.92 11260.28 00:09:42.418 { 00:09:42.418 "results": [ 00:09:42.418 { 00:09:42.418 "job": "Nvme0n1", 00:09:42.418 "core_mask": "0x2", 00:09:42.418 "workload": "randwrite", 00:09:42.418 "status": "finished", 00:09:42.418 "queue_depth": 128, 00:09:42.418 "io_size": 4096, 00:09:42.418 "runtime": 10.003522, 00:09:42.418 "iops": 25741.133972614844, 00:09:42.418 "mibps": 100.55130458052673, 00:09:42.418 "io_failed": 0, 00:09:42.418 "io_timeout": 0, 00:09:42.418 "avg_latency_us": 4969.831409097617, 00:09:42.418 "min_latency_us": 2412.9163636363637, 00:09:42.418 "max_latency_us": 11260.276363636363 00:09:42.418 } 00:09:42.418 ], 00:09:42.418 "core_count": 1 00:09:42.418 } 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 773789 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 773789 ']' 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 773789 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773789 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:42.418 12:23:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773789' 00:09:42.418 killing process with pid 773789 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 773789 00:09:42.418 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.418 00:09:42.418 Latency(us) 00:09:42.418 [2024-11-20T11:23:48.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.418 [2024-11-20T11:23:48.182Z] =================================================================================================================== 00:09:42.418 [2024-11-20T11:23:48.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.418 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 773789 00:09:42.707 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:42.983 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:42.983 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:42.983 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.266 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:43.266 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:43.266 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.266 [2024-11-20 12:23:49.022108] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.525 12:23:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:43.525 request: 00:09:43.525 { 00:09:43.525 "uuid": "a71352a3-bfe1-400f-bad7-1ad0bf231f39", 00:09:43.525 "method": "bdev_lvol_get_lvstores", 00:09:43.525 "req_id": 1 00:09:43.525 } 00:09:43.525 Got JSON-RPC error response 00:09:43.525 response: 00:09:43.525 { 00:09:43.525 "code": -19, 00:09:43.525 "message": "No such device" 00:09:43.525 } 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.525 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.784 aio_bdev 00:09:43.784 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 63ac8b13-1e65-41fa-94cc-ef4a4d3e472d 00:09:43.784 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=63ac8b13-1e65-41fa-94cc-ef4a4d3e472d 00:09:43.784 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.784 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:43.784 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.784 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.784 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.042 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 63ac8b13-1e65-41fa-94cc-ef4a4d3e472d -t 2000 00:09:44.042 [ 00:09:44.042 { 00:09:44.042 "name": "63ac8b13-1e65-41fa-94cc-ef4a4d3e472d", 00:09:44.043 "aliases": [ 00:09:44.043 "lvs/lvol" 00:09:44.043 ], 00:09:44.043 "product_name": "Logical Volume", 00:09:44.043 "block_size": 4096, 00:09:44.043 "num_blocks": 38912, 00:09:44.043 "uuid": "63ac8b13-1e65-41fa-94cc-ef4a4d3e472d", 00:09:44.043 "assigned_rate_limits": { 00:09:44.043 "rw_ios_per_sec": 0, 00:09:44.043 "rw_mbytes_per_sec": 0, 00:09:44.043 "r_mbytes_per_sec": 0, 00:09:44.043 "w_mbytes_per_sec": 0 00:09:44.043 }, 00:09:44.043 "claimed": false, 00:09:44.043 "zoned": false, 00:09:44.043 "supported_io_types": { 00:09:44.043 "read": true, 00:09:44.043 "write": true, 00:09:44.043 "unmap": true, 00:09:44.043 "flush": false, 00:09:44.043 "reset": true, 00:09:44.043 
"nvme_admin": false, 00:09:44.043 "nvme_io": false, 00:09:44.043 "nvme_io_md": false, 00:09:44.043 "write_zeroes": true, 00:09:44.043 "zcopy": false, 00:09:44.043 "get_zone_info": false, 00:09:44.043 "zone_management": false, 00:09:44.043 "zone_append": false, 00:09:44.043 "compare": false, 00:09:44.043 "compare_and_write": false, 00:09:44.043 "abort": false, 00:09:44.043 "seek_hole": true, 00:09:44.043 "seek_data": true, 00:09:44.043 "copy": false, 00:09:44.043 "nvme_iov_md": false 00:09:44.043 }, 00:09:44.043 "driver_specific": { 00:09:44.043 "lvol": { 00:09:44.043 "lvol_store_uuid": "a71352a3-bfe1-400f-bad7-1ad0bf231f39", 00:09:44.043 "base_bdev": "aio_bdev", 00:09:44.043 "thin_provision": false, 00:09:44.043 "num_allocated_clusters": 38, 00:09:44.043 "snapshot": false, 00:09:44.043 "clone": false, 00:09:44.043 "esnap_clone": false 00:09:44.043 } 00:09:44.043 } 00:09:44.043 } 00:09:44.043 ] 00:09:44.043 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:44.043 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:44.043 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.301 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.301 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:44.301 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.560 12:23:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.560 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 63ac8b13-1e65-41fa-94cc-ef4a4d3e472d 00:09:44.560 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a71352a3-bfe1-400f-bad7-1ad0bf231f39 00:09:44.819 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.078 00:09:45.078 real 0m15.862s 00:09:45.078 user 0m15.557s 00:09:45.078 sys 0m1.403s 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:45.078 ************************************ 00:09:45.078 END TEST lvs_grow_clean 00:09:45.078 ************************************ 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.078 ************************************ 
00:09:45.078 START TEST lvs_grow_dirty 00:09:45.078 ************************************ 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.078 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:45.337 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:45.337 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:45.597 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:09:45.597 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:09:45.597 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:45.597 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:45.597 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:45.597 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 lvol 150 00:09:45.856 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 00:09:45.856 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.856 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:46.116 [2024-11-20 12:23:51.646161] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:46.116 [2024-11-20 12:23:51.646207] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:46.116 true 00:09:46.116 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:09:46.116 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:46.116 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:46.116 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:46.376 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 00:09:46.635 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:46.635 [2024-11-20 12:23:52.324148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.635 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=776761 00:09:46.894 12:23:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 776761 /var/tmp/bdevperf.sock 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 776761 ']' 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.894 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.894 [2024-11-20 12:23:52.542438] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
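[Editor's note] The bdevperf invocation above runs a 10-second randwrite workload at queue depth 128 with 4 KiB I/Os (`-q 128 -o 4096 -w randwrite -t 10`). The summary figures this run reports further down (25619.25 IOPS, 100.08 MiB/s, 4993.58 µs average latency) are mutually consistent; a quick sanity check, assuming the queue stayed essentially full (Little's law at steady state — an approximation, not something the log states):

```python
# Sanity-check the bdevperf summary numbers from this run.
# The reported_* values are copied verbatim from the JSON results block
# later in this log; the latency estimate assumes a full queue throughout.
iops = 25619.24951172529            # reported "iops"
io_size = 4096                      # -o 4096 (bytes per I/O)
queue_depth = 128                   # -q 128
reported_mibps = 100.07519340517692
reported_avg_lat_us = 4993.583510398246

# Throughput in MiB/s follows directly from IOPS times I/O size.
mibps = iops * io_size / (1 << 20)

# Little's law: mean latency ~ queue depth / IOPS (approximate, since the
# queue is not full during ramp-up and drain).
est_lat_us = queue_depth / iops * 1e6

print(round(mibps, 2))              # matches the reported 100.08 MiB/s
print(round(est_lat_us, 1))         # within ~0.1% of the reported 4993.58 us
```

The small gap between the estimated and reported latency is expected, since the queue is not at full depth for the entire runtime.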
00:09:46.894 [2024-11-20 12:23:52.542480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776761 ] 00:09:46.894 [2024-11-20 12:23:52.612433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.894 [2024-11-20 12:23:52.650851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.831 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.831 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:47.831 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:48.091 Nvme0n1 00:09:48.091 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:48.091 [ 00:09:48.091 { 00:09:48.091 "name": "Nvme0n1", 00:09:48.091 "aliases": [ 00:09:48.091 "b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8" 00:09:48.091 ], 00:09:48.091 "product_name": "NVMe disk", 00:09:48.091 "block_size": 4096, 00:09:48.091 "num_blocks": 38912, 00:09:48.091 "uuid": "b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8", 00:09:48.091 "numa_id": 0, 00:09:48.091 "assigned_rate_limits": { 00:09:48.091 "rw_ios_per_sec": 0, 00:09:48.091 "rw_mbytes_per_sec": 0, 00:09:48.091 "r_mbytes_per_sec": 0, 00:09:48.091 "w_mbytes_per_sec": 0 00:09:48.091 }, 00:09:48.091 "claimed": false, 00:09:48.091 "zoned": false, 00:09:48.091 "supported_io_types": { 00:09:48.091 "read": true, 
00:09:48.091 "write": true, 00:09:48.091 "unmap": true, 00:09:48.091 "flush": true, 00:09:48.091 "reset": true, 00:09:48.091 "nvme_admin": true, 00:09:48.091 "nvme_io": true, 00:09:48.091 "nvme_io_md": false, 00:09:48.091 "write_zeroes": true, 00:09:48.091 "zcopy": false, 00:09:48.091 "get_zone_info": false, 00:09:48.091 "zone_management": false, 00:09:48.091 "zone_append": false, 00:09:48.091 "compare": true, 00:09:48.091 "compare_and_write": true, 00:09:48.091 "abort": true, 00:09:48.091 "seek_hole": false, 00:09:48.091 "seek_data": false, 00:09:48.091 "copy": true, 00:09:48.091 "nvme_iov_md": false 00:09:48.091 }, 00:09:48.091 "memory_domains": [ 00:09:48.091 { 00:09:48.091 "dma_device_id": "system", 00:09:48.091 "dma_device_type": 1 00:09:48.091 } 00:09:48.091 ], 00:09:48.091 "driver_specific": { 00:09:48.091 "nvme": [ 00:09:48.091 { 00:09:48.091 "trid": { 00:09:48.091 "trtype": "TCP", 00:09:48.091 "adrfam": "IPv4", 00:09:48.091 "traddr": "10.0.0.2", 00:09:48.091 "trsvcid": "4420", 00:09:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:48.091 }, 00:09:48.091 "ctrlr_data": { 00:09:48.091 "cntlid": 1, 00:09:48.091 "vendor_id": "0x8086", 00:09:48.091 "model_number": "SPDK bdev Controller", 00:09:48.091 "serial_number": "SPDK0", 00:09:48.091 "firmware_revision": "25.01", 00:09:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.091 "oacs": { 00:09:48.091 "security": 0, 00:09:48.091 "format": 0, 00:09:48.091 "firmware": 0, 00:09:48.091 "ns_manage": 0 00:09:48.091 }, 00:09:48.091 "multi_ctrlr": true, 00:09:48.091 "ana_reporting": false 00:09:48.091 }, 00:09:48.091 "vs": { 00:09:48.091 "nvme_version": "1.3" 00:09:48.091 }, 00:09:48.091 "ns_data": { 00:09:48.091 "id": 1, 00:09:48.091 "can_share": true 00:09:48.091 } 00:09:48.091 } 00:09:48.091 ], 00:09:48.091 "mp_policy": "active_passive" 00:09:48.091 } 00:09:48.091 } 00:09:48.091 ] 00:09:48.091 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=777025 
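[Editor's note] The `Nvme0n1` bdev attached above reports `block_size: 4096` and `num_blocks: 38912`. That size is exactly the 150 MiB lvol rounded up to the lvstore's 4 MiB cluster boundary (38 clusters = 152 MiB), which also matches the `num_allocated_clusters: 38` shown for the lvol later in this log. The arithmetic:

```python
import math

# Sizes taken from this log: the exported namespace reports
# block_size 4096 and num_blocks 38912; the lvstore was created with
# --cluster-sz 4194304 and the lvol with a size of 150 (MiB).
block_size = 4096
num_blocks = 38912
cluster_mib = 4
lvol_mib = 150

# Namespace size in MiB, straight from the bdev_get_bdevs output.
size_mib = num_blocks * block_size / (1 << 20)

# A 150 MiB lvol on 4 MiB clusters rounds up to 38 clusters = 152 MiB,
# matching the exported namespace size exactly.
clusters = math.ceil(lvol_mib / cluster_mib)

print(size_mib, clusters, clusters * cluster_mib)  # 152.0 38 152
```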
00:09:48.091 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:48.091 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:48.351 Running I/O for 10 seconds... 00:09:49.288 Latency(us) 00:09:49.288 [2024-11-20T11:23:55.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.288 Nvme0n1 : 1.00 25057.00 97.88 0.00 0.00 0.00 0.00 0.00 00:09:49.288 [2024-11-20T11:23:55.052Z] =================================================================================================================== 00:09:49.288 [2024-11-20T11:23:55.052Z] Total : 25057.00 97.88 0.00 0.00 0.00 0.00 0.00 00:09:49.288 00:09:50.225 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:09:50.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.225 Nvme0n1 : 2.00 25355.50 99.04 0.00 0.00 0.00 0.00 0.00 00:09:50.225 [2024-11-20T11:23:55.989Z] =================================================================================================================== 00:09:50.225 [2024-11-20T11:23:55.989Z] Total : 25355.50 99.04 0.00 0.00 0.00 0.00 0.00 00:09:50.225 00:09:50.225 true 00:09:50.485 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:09:50.485 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:50.485 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:50.485 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:50.485 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 777025 00:09:51.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.423 Nvme0n1 : 3.00 25425.00 99.32 0.00 0.00 0.00 0.00 0.00 00:09:51.423 [2024-11-20T11:23:57.187Z] =================================================================================================================== 00:09:51.423 [2024-11-20T11:23:57.187Z] Total : 25425.00 99.32 0.00 0.00 0.00 0.00 0.00 00:09:51.423 00:09:52.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.361 Nvme0n1 : 4.00 25505.50 99.63 0.00 0.00 0.00 0.00 0.00 00:09:52.361 [2024-11-20T11:23:58.125Z] =================================================================================================================== 00:09:52.361 [2024-11-20T11:23:58.125Z] Total : 25505.50 99.63 0.00 0.00 0.00 0.00 0.00 00:09:52.361 00:09:53.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.299 Nvme0n1 : 5.00 25548.00 99.80 0.00 0.00 0.00 0.00 0.00 00:09:53.299 [2024-11-20T11:23:59.063Z] =================================================================================================================== 00:09:53.299 [2024-11-20T11:23:59.063Z] Total : 25548.00 99.80 0.00 0.00 0.00 0.00 0.00 00:09:53.299 00:09:54.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.236 Nvme0n1 : 6.00 25576.83 99.91 0.00 0.00 0.00 0.00 0.00 00:09:54.236 [2024-11-20T11:24:00.000Z] =================================================================================================================== 00:09:54.236 
[2024-11-20T11:24:00.000Z] Total : 25576.83 99.91 0.00 0.00 0.00 0.00 0.00 00:09:54.236 00:09:55.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.172 Nvme0n1 : 7.00 25569.71 99.88 0.00 0.00 0.00 0.00 0.00 00:09:55.172 [2024-11-20T11:24:00.936Z] =================================================================================================================== 00:09:55.172 [2024-11-20T11:24:00.936Z] Total : 25569.71 99.88 0.00 0.00 0.00 0.00 0.00 00:09:55.172 00:09:56.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.548 Nvme0n1 : 8.00 25586.38 99.95 0.00 0.00 0.00 0.00 0.00 00:09:56.548 [2024-11-20T11:24:02.312Z] =================================================================================================================== 00:09:56.548 [2024-11-20T11:24:02.312Z] Total : 25586.38 99.95 0.00 0.00 0.00 0.00 0.00 00:09:56.548 00:09:57.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.486 Nvme0n1 : 9.00 25605.56 100.02 0.00 0.00 0.00 0.00 0.00 00:09:57.486 [2024-11-20T11:24:03.250Z] =================================================================================================================== 00:09:57.486 [2024-11-20T11:24:03.250Z] Total : 25605.56 100.02 0.00 0.00 0.00 0.00 0.00 00:09:57.486 00:09:58.424 00:09:58.424 Latency(us) 00:09:58.424 [2024-11-20T11:24:04.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.424 Nvme0n1 : 10.00 25619.25 100.08 0.00 0.00 4993.58 2919.33 14417.92 00:09:58.424 [2024-11-20T11:24:04.188Z] =================================================================================================================== 00:09:58.424 [2024-11-20T11:24:04.188Z] Total : 25619.25 100.08 0.00 0.00 4993.58 2919.33 14417.92 00:09:58.424 { 00:09:58.424 "results": [ 00:09:58.424 { 00:09:58.424 "job": "Nvme0n1", 
00:09:58.424 "core_mask": "0x2", 00:09:58.424 "workload": "randwrite", 00:09:58.424 "status": "finished", 00:09:58.424 "queue_depth": 128, 00:09:58.424 "io_size": 4096, 00:09:58.424 "runtime": 10.001542, 00:09:58.424 "iops": 25619.24951172529, 00:09:58.424 "mibps": 100.07519340517692, 00:09:58.424 "io_failed": 0, 00:09:58.424 "io_timeout": 0, 00:09:58.424 "avg_latency_us": 4993.583510398246, 00:09:58.424 "min_latency_us": 2919.3309090909092, 00:09:58.424 "max_latency_us": 14417.92 00:09:58.424 } 00:09:58.424 ], 00:09:58.424 "core_count": 1 00:09:58.424 } 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 776761 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 776761 ']' 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 776761 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776761 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776761' 00:09:58.424 killing process with pid 776761 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 776761 00:09:58.424 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:58.424 00:09:58.424 Latency(us) 00:09:58.424 [2024-11-20T11:24:04.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.424 [2024-11-20T11:24:04.188Z] =================================================================================================================== 00:09:58.424 [2024-11-20T11:24:04.188Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:58.424 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 776761 00:09:58.424 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.683 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.942 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:09:58.942 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 773218 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 773218 00:09:59.201 
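[Editor's note] The cluster counts asserted throughout this test (49 before growth at sh@30, 99 after at sh@62, 61 free at sh@70) are consistent with the 4 MiB cluster size and the 200 MiB / 400 MiB / 150 MiB sizes set at the top of `lvs_grow_dirty`. A small check; note the one-cluster metadata overhead is inferred from the observed counts, not something the log states:

```python
import math

# Cluster accounting for this lvs_grow_dirty run, using the sizes from the
# test script: 200 MiB initial AIO file, grown to 400 MiB, a 150 MiB lvol,
# 4 MiB clusters. The single metadata cluster is an assumption inferred
# from the counts observed in this log (49, 99, 61).
cluster_mib = 4
md_clusters = 1                                            # assumed overhead

data_clusters_initial = 200 // cluster_mib - md_clusters   # 49, checked at sh@30
data_clusters_grown = 400 // cluster_mib - md_clusters     # 99, checked at sh@62
lvol_clusters = math.ceil(150 / cluster_mib)               # 38 allocated
free_clusters = data_clusters_grown - lvol_clusters        # 61, checked at sh@70

print(data_clusters_initial, data_clusters_grown, lvol_clusters, free_clusters)
# 49 99 38 61
```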
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 773218 Killed "${NVMF_APP[@]}" "$@" 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=779067 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 779067 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 779067 ']' 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.201 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.201 [2024-11-20 12:24:04.819533] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:09:59.201 [2024-11-20 12:24:04.819578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.202 [2024-11-20 12:24:04.897430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.202 [2024-11-20 12:24:04.937066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.202 [2024-11-20 12:24:04.937102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.202 [2024-11-20 12:24:04.937109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.202 [2024-11-20 12:24:04.937114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.202 [2024-11-20 12:24:04.937119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:59.202 [2024-11-20 12:24:04.937715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.139 [2024-11-20 12:24:05.819506] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:00.139 [2024-11-20 12:24:05.819578] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:00.139 [2024-11-20 12:24:05.819602] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 
00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.139 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.398 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 -t 2000 00:10:00.657 [ 00:10:00.657 { 00:10:00.657 "name": "b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8", 00:10:00.657 "aliases": [ 00:10:00.657 "lvs/lvol" 00:10:00.657 ], 00:10:00.657 "product_name": "Logical Volume", 00:10:00.657 "block_size": 4096, 00:10:00.657 "num_blocks": 38912, 00:10:00.657 "uuid": "b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8", 00:10:00.657 "assigned_rate_limits": { 00:10:00.657 "rw_ios_per_sec": 0, 00:10:00.657 "rw_mbytes_per_sec": 0, 00:10:00.657 "r_mbytes_per_sec": 0, 00:10:00.657 "w_mbytes_per_sec": 0 00:10:00.657 }, 00:10:00.657 "claimed": false, 00:10:00.657 "zoned": false, 00:10:00.657 "supported_io_types": { 00:10:00.657 "read": true, 00:10:00.657 "write": true, 00:10:00.657 "unmap": true, 00:10:00.657 "flush": false, 00:10:00.657 "reset": true, 00:10:00.657 "nvme_admin": false, 00:10:00.657 "nvme_io": false, 00:10:00.657 "nvme_io_md": false, 00:10:00.657 "write_zeroes": true, 00:10:00.657 "zcopy": false, 00:10:00.657 "get_zone_info": false, 00:10:00.657 "zone_management": false, 00:10:00.657 "zone_append": 
false, 00:10:00.657 "compare": false, 00:10:00.657 "compare_and_write": false, 00:10:00.657 "abort": false, 00:10:00.657 "seek_hole": true, 00:10:00.657 "seek_data": true, 00:10:00.657 "copy": false, 00:10:00.657 "nvme_iov_md": false 00:10:00.657 }, 00:10:00.657 "driver_specific": { 00:10:00.657 "lvol": { 00:10:00.657 "lvol_store_uuid": "e941b31c-e9c2-45a5-ab22-e571dee2c3d0", 00:10:00.657 "base_bdev": "aio_bdev", 00:10:00.657 "thin_provision": false, 00:10:00.657 "num_allocated_clusters": 38, 00:10:00.657 "snapshot": false, 00:10:00.657 "clone": false, 00:10:00.657 "esnap_clone": false 00:10:00.657 } 00:10:00.657 } 00:10:00.657 } 00:10:00.657 ] 00:10:00.657 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:00.657 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:00.657 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:00.657 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:00.657 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:00.657 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:00.917 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:00.917 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:01.176 [2024-11-20 12:24:06.716370] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.176 12:24:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:01.176 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:01.435 request: 00:10:01.435 { 00:10:01.435 "uuid": "e941b31c-e9c2-45a5-ab22-e571dee2c3d0", 00:10:01.435 "method": "bdev_lvol_get_lvstores", 00:10:01.435 "req_id": 1 00:10:01.435 } 00:10:01.435 Got JSON-RPC error response 00:10:01.435 response: 00:10:01.435 { 00:10:01.435 "code": -19, 00:10:01.435 "message": "No such device" 00:10:01.435 } 00:10:01.435 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:01.435 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:01.435 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:01.435 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:01.435 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:01.435 aio_bdev 00:10:01.436 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 00:10:01.436 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 00:10:01.436 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.436 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:01.436 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.436 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.436 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:01.694 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 -t 2000 00:10:01.954 [ 00:10:01.954 { 00:10:01.954 "name": "b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8", 00:10:01.954 "aliases": [ 00:10:01.954 "lvs/lvol" 00:10:01.954 ], 00:10:01.954 "product_name": "Logical Volume", 00:10:01.954 "block_size": 4096, 00:10:01.954 "num_blocks": 38912, 00:10:01.954 "uuid": "b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8", 00:10:01.954 "assigned_rate_limits": { 00:10:01.954 "rw_ios_per_sec": 0, 00:10:01.954 "rw_mbytes_per_sec": 0, 00:10:01.954 "r_mbytes_per_sec": 0, 00:10:01.954 "w_mbytes_per_sec": 0 00:10:01.954 }, 00:10:01.954 "claimed": false, 00:10:01.954 "zoned": false, 00:10:01.954 "supported_io_types": { 00:10:01.954 "read": true, 00:10:01.954 "write": true, 00:10:01.954 "unmap": true, 00:10:01.954 "flush": false, 00:10:01.954 "reset": true, 00:10:01.954 "nvme_admin": false, 00:10:01.954 "nvme_io": false, 00:10:01.954 "nvme_io_md": false, 00:10:01.954 "write_zeroes": true, 00:10:01.954 "zcopy": false, 00:10:01.954 "get_zone_info": false, 00:10:01.954 "zone_management": false, 00:10:01.954 "zone_append": false, 00:10:01.954 "compare": false, 00:10:01.954 "compare_and_write": false, 
00:10:01.954 "abort": false, 00:10:01.954 "seek_hole": true, 00:10:01.954 "seek_data": true, 00:10:01.954 "copy": false, 00:10:01.954 "nvme_iov_md": false 00:10:01.954 }, 00:10:01.954 "driver_specific": { 00:10:01.954 "lvol": { 00:10:01.954 "lvol_store_uuid": "e941b31c-e9c2-45a5-ab22-e571dee2c3d0", 00:10:01.954 "base_bdev": "aio_bdev", 00:10:01.954 "thin_provision": false, 00:10:01.954 "num_allocated_clusters": 38, 00:10:01.954 "snapshot": false, 00:10:01.954 "clone": false, 00:10:01.954 "esnap_clone": false 00:10:01.954 } 00:10:01.954 } 00:10:01.954 } 00:10:01.954 ] 00:10:01.954 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:01.954 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:01.954 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:01.954 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:01.954 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:01.954 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:02.213 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:02.213 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4e5f465-8d8b-4be5-ae20-78e6ad0a36f8 00:10:02.472 12:24:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e941b31c-e9c2-45a5-ab22-e571dee2c3d0 00:10:02.472 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:02.731 00:10:02.731 real 0m17.635s 00:10:02.731 user 0m45.614s 00:10:02.731 sys 0m3.392s 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 ************************************ 00:10:02.731 END TEST lvs_grow_dirty 00:10:02.731 ************************************ 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:02.731 nvmf_trace.0 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.731 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.731 rmmod nvme_tcp 00:10:02.990 rmmod nvme_fabrics 00:10:02.990 rmmod nvme_keyring 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 779067 ']' 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 779067 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 779067 ']' 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 779067 
00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779067 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779067' 00:10:02.990 killing process with pid 779067 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 779067 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 779067 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.990 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.991 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.991 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:02.991 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:02.991 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.991 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.249 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.249 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:10:03.249 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.249 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.249 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.156 00:10:05.156 real 0m43.502s 00:10:05.156 user 1m7.346s 00:10:05.156 sys 0m9.873s 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:05.156 ************************************ 00:10:05.156 END TEST nvmf_lvs_grow 00:10:05.156 ************************************ 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.156 ************************************ 00:10:05.156 START TEST nvmf_bdev_io_wait 00:10:05.156 ************************************ 00:10:05.156 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:05.416 * Looking for test storage... 
00:10:05.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.416 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.416 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.416 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.416 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.416 --rc genhtml_branch_coverage=1 00:10:05.416 --rc genhtml_function_coverage=1 00:10:05.416 --rc genhtml_legend=1 00:10:05.416 --rc geninfo_all_blocks=1 00:10:05.416 --rc geninfo_unexecuted_blocks=1 00:10:05.416 00:10:05.416 ' 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.416 --rc genhtml_branch_coverage=1 00:10:05.416 --rc genhtml_function_coverage=1 00:10:05.416 --rc genhtml_legend=1 00:10:05.416 --rc geninfo_all_blocks=1 00:10:05.416 --rc geninfo_unexecuted_blocks=1 00:10:05.416 00:10:05.416 ' 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.416 --rc genhtml_branch_coverage=1 00:10:05.416 --rc genhtml_function_coverage=1 00:10:05.416 --rc genhtml_legend=1 00:10:05.416 --rc geninfo_all_blocks=1 00:10:05.416 --rc geninfo_unexecuted_blocks=1 00:10:05.416 00:10:05.416 ' 00:10:05.416 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.416 --rc genhtml_branch_coverage=1 00:10:05.416 --rc genhtml_function_coverage=1 00:10:05.416 --rc genhtml_legend=1 00:10:05.416 --rc geninfo_all_blocks=1 00:10:05.416 --rc geninfo_unexecuted_blocks=1 00:10:05.416 00:10:05.416 ' 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.417 12:24:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.417 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.990 12:24:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:10:11.990 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.990 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:10:11.990 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.991 12:24:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:10:11.991 Found net devices under 0000:1a:00.0: cvl_0_0 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.991 
12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:10:11.991 Found net devices under 0000:1a:00.1: cvl_0_1 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.991 12:24:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:10:11.991 00:10:11.991 --- 10.0.0.2 ping statistics --- 00:10:11.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.991 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:10:11.991 00:10:11.991 --- 10.0.0.1 ping statistics --- 00:10:11.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.991 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=784026 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 784026 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 784026 ']' 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.991 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.991 [2024-11-20 12:24:17.402494] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:11.991 [2024-11-20 12:24:17.402534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.991 [2024-11-20 12:24:17.476080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.991 [2024-11-20 12:24:17.516459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.991 [2024-11-20 12:24:17.516496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.991 [2024-11-20 12:24:17.516503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.991 [2024-11-20 12:24:17.516508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.991 [2024-11-20 12:24:17.516513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.991 [2024-11-20 12:24:17.517987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.991 [2024-11-20 12:24:17.518102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.991 [2024-11-20 12:24:17.518216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.991 [2024-11-20 12:24:17.518216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.557 12:24:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.557 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.816 [2024-11-20 12:24:18.330431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.816 Malloc0 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.816 
12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.816 [2024-11-20 12:24:18.384966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=784306 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=784308 
00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:12.816 { 00:10:12.816 "params": { 00:10:12.816 "name": "Nvme$subsystem", 00:10:12.816 "trtype": "$TEST_TRANSPORT", 00:10:12.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.816 "adrfam": "ipv4", 00:10:12.816 "trsvcid": "$NVMF_PORT", 00:10:12.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.816 "hdgst": ${hdgst:-false}, 00:10:12.816 "ddgst": ${ddgst:-false} 00:10:12.816 }, 00:10:12.816 "method": "bdev_nvme_attach_controller" 00:10:12.816 } 00:10:12.816 EOF 00:10:12.816 )") 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=784310 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=784313 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:12.816 { 00:10:12.816 "params": { 00:10:12.816 "name": "Nvme$subsystem", 00:10:12.816 "trtype": "$TEST_TRANSPORT", 00:10:12.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.816 "adrfam": "ipv4", 00:10:12.816 "trsvcid": "$NVMF_PORT", 00:10:12.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.816 "hdgst": ${hdgst:-false}, 00:10:12.816 "ddgst": ${ddgst:-false} 00:10:12.816 }, 00:10:12.816 "method": "bdev_nvme_attach_controller" 00:10:12.816 } 00:10:12.816 EOF 00:10:12.816 )") 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:12.816 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:12.817 12:24:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:12.817 { 00:10:12.817 "params": { 00:10:12.817 "name": "Nvme$subsystem", 00:10:12.817 "trtype": "$TEST_TRANSPORT", 00:10:12.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.817 "adrfam": "ipv4", 00:10:12.817 "trsvcid": "$NVMF_PORT", 00:10:12.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.817 "hdgst": ${hdgst:-false}, 00:10:12.817 "ddgst": ${ddgst:-false} 00:10:12.817 }, 00:10:12.817 "method": "bdev_nvme_attach_controller" 00:10:12.817 } 00:10:12.817 EOF 00:10:12.817 )") 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:12.817 { 00:10:12.817 "params": { 00:10:12.817 "name": "Nvme$subsystem", 00:10:12.817 "trtype": "$TEST_TRANSPORT", 00:10:12.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.817 "adrfam": "ipv4", 00:10:12.817 "trsvcid": "$NVMF_PORT", 00:10:12.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.817 "hdgst": ${hdgst:-false}, 00:10:12.817 "ddgst": ${ddgst:-false} 00:10:12.817 }, 00:10:12.817 "method": "bdev_nvme_attach_controller" 00:10:12.817 } 00:10:12.817 EOF 00:10:12.817 )") 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 784306 00:10:12.817 12:24:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:12.817 "params": { 00:10:12.817 "name": "Nvme1", 00:10:12.817 "trtype": "tcp", 00:10:12.817 "traddr": "10.0.0.2", 00:10:12.817 "adrfam": "ipv4", 00:10:12.817 "trsvcid": "4420", 00:10:12.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.817 "hdgst": false, 00:10:12.817 "ddgst": false 00:10:12.817 }, 00:10:12.817 "method": "bdev_nvme_attach_controller" 00:10:12.817 }' 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:12.817 "params": { 00:10:12.817 "name": "Nvme1", 00:10:12.817 "trtype": "tcp", 00:10:12.817 "traddr": "10.0.0.2", 00:10:12.817 "adrfam": "ipv4", 00:10:12.817 "trsvcid": "4420", 00:10:12.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.817 "hdgst": false, 00:10:12.817 "ddgst": false 00:10:12.817 }, 00:10:12.817 "method": "bdev_nvme_attach_controller" 00:10:12.817 }' 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:12.817 "params": { 00:10:12.817 "name": "Nvme1", 00:10:12.817 "trtype": "tcp", 00:10:12.817 "traddr": "10.0.0.2", 00:10:12.817 "adrfam": "ipv4", 00:10:12.817 "trsvcid": "4420", 00:10:12.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.817 "hdgst": false, 00:10:12.817 "ddgst": false 00:10:12.817 }, 00:10:12.817 "method": "bdev_nvme_attach_controller" 00:10:12.817 }' 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:12.817 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:12.817 "params": { 00:10:12.817 "name": "Nvme1", 00:10:12.817 "trtype": "tcp", 00:10:12.817 "traddr": "10.0.0.2", 00:10:12.817 "adrfam": "ipv4", 00:10:12.817 "trsvcid": "4420", 00:10:12.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.817 "hdgst": false, 00:10:12.817 "ddgst": false 00:10:12.817 }, 00:10:12.817 "method": "bdev_nvme_attach_controller" 00:10:12.817 }' 00:10:12.817 [2024-11-20 12:24:18.434596] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:12.817 [2024-11-20 12:24:18.434646] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:12.817 [2024-11-20 12:24:18.436708] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:12.817 [2024-11-20 12:24:18.436709] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:10:12.817 [2024-11-20 12:24:18.436752] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:12.817 [2024-11-20 12:24:18.436753] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:12.817 [2024-11-20 12:24:18.442242] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:12.817 [2024-11-20 12:24:18.442282] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:13.076 [2024-11-20 12:24:18.611127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.076 [2024-11-20 12:24:18.651229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:13.076 [2024-11-20 12:24:18.702550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.076 [2024-11-20 12:24:18.753726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.076 [2024-11-20 12:24:18.757569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:13.076 [2024-11-20 12:24:18.794447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:13.076 [2024-11-20 12:24:18.814265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.334 [2024-11-20 12:24:18.853813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:13.334 Running I/O for 1 seconds... 00:10:13.334 Running I/O for 1 seconds... 00:10:13.334 Running I/O for 1 seconds... 00:10:13.334 Running I/O for 1 seconds... 
00:10:14.268 6963.00 IOPS, 27.20 MiB/s [2024-11-20T11:24:20.032Z] 249424.00 IOPS, 974.31 MiB/s 00:10:14.269 Latency(us) 00:10:14.269 [2024-11-20T11:24:20.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.269 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:14.269 Nvme1n1 : 1.00 249046.48 972.84 0.00 0.00 511.01 229.00 1489.45 00:10:14.269 [2024-11-20T11:24:20.033Z] =================================================================================================================== 00:10:14.269 [2024-11-20T11:24:20.033Z] Total : 249046.48 972.84 0.00 0.00 511.01 229.00 1489.45 00:10:14.269 00:10:14.269 Latency(us) 00:10:14.269 [2024-11-20T11:24:20.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.269 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:14.269 Nvme1n1 : 1.02 6945.40 27.13 0.00 0.00 18229.27 6404.65 28359.21 00:10:14.269 [2024-11-20T11:24:20.033Z] =================================================================================================================== 00:10:14.269 [2024-11-20T11:24:20.033Z] Total : 6945.40 27.13 0.00 0.00 18229.27 6404.65 28359.21 00:10:14.269 6372.00 IOPS, 24.89 MiB/s [2024-11-20T11:24:20.033Z] 14228.00 IOPS, 55.58 MiB/s 00:10:14.269 Latency(us) 00:10:14.269 [2024-11-20T11:24:20.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.269 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:14.269 Nvme1n1 : 1.01 6456.50 25.22 0.00 0.00 19750.48 6136.55 41943.04 00:10:14.269 [2024-11-20T11:24:20.033Z] =================================================================================================================== 00:10:14.269 [2024-11-20T11:24:20.033Z] Total : 6456.50 25.22 0.00 0.00 19750.48 6136.55 41943.04 00:10:14.269 00:10:14.269 Latency(us) 00:10:14.269 [2024-11-20T11:24:20.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:10:14.269 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:14.269 Nvme1n1 : 1.01 14282.92 55.79 0.00 0.00 8934.83 4557.73 19422.49 00:10:14.269 [2024-11-20T11:24:20.033Z] =================================================================================================================== 00:10:14.269 [2024-11-20T11:24:20.033Z] Total : 14282.92 55.79 0.00 0.00 8934.83 4557.73 19422.49 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 784308 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 784310 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 784313 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.570 rmmod nvme_tcp 00:10:14.570 rmmod nvme_fabrics 00:10:14.570 rmmod nvme_keyring 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 784026 ']' 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 784026 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 784026 ']' 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 784026 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784026 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784026' 00:10:14.570 killing process with pid 784026 00:10:14.570 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 784026 00:10:14.570 12:24:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 784026 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.879 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.786 00:10:16.786 real 0m11.551s 00:10:16.786 user 0m18.237s 00:10:16.786 sys 0m6.241s 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:16.786 ************************************ 
00:10:16.786 END TEST nvmf_bdev_io_wait 00:10:16.786 ************************************ 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.786 ************************************ 00:10:16.786 START TEST nvmf_queue_depth 00:10:16.786 ************************************ 00:10:16.786 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:17.046 * Looking for test storage... 00:10:17.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:17.046 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.047 --rc genhtml_branch_coverage=1 00:10:17.047 --rc genhtml_function_coverage=1 00:10:17.047 --rc genhtml_legend=1 00:10:17.047 --rc geninfo_all_blocks=1 00:10:17.047 --rc 
geninfo_unexecuted_blocks=1 00:10:17.047 00:10:17.047 ' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.047 --rc genhtml_branch_coverage=1 00:10:17.047 --rc genhtml_function_coverage=1 00:10:17.047 --rc genhtml_legend=1 00:10:17.047 --rc geninfo_all_blocks=1 00:10:17.047 --rc geninfo_unexecuted_blocks=1 00:10:17.047 00:10:17.047 ' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.047 --rc genhtml_branch_coverage=1 00:10:17.047 --rc genhtml_function_coverage=1 00:10:17.047 --rc genhtml_legend=1 00:10:17.047 --rc geninfo_all_blocks=1 00:10:17.047 --rc geninfo_unexecuted_blocks=1 00:10:17.047 00:10:17.047 ' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.047 --rc genhtml_branch_coverage=1 00:10:17.047 --rc genhtml_function_coverage=1 00:10:17.047 --rc genhtml_legend=1 00:10:17.047 --rc geninfo_all_blocks=1 00:10:17.047 --rc geninfo_unexecuted_blocks=1 00:10:17.047 00:10:17.047 ' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.047 12:24:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.047 12:24:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.047 12:24:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.047 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.641 12:24:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.641 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:10:23.642 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:10:23.642 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:10:23.642 Found net devices under 0000:1a:00.0: cvl_0_0 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:10:23.642 Found net devices under 0000:1a:00.1: cvl_0_1 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.642 
12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:10:23.642 00:10:23.642 --- 10.0.0.2 ping statistics --- 00:10:23.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.642 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:23.642 00:10:23.642 --- 10.0.0.1 ping statistics --- 00:10:23.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.642 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=788380 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 788380 
00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 788380 ']' 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.642 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.642 [2024-11-20 12:24:28.997458] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:23.642 [2024-11-20 12:24:28.997503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.642 [2024-11-20 12:24:29.076955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.642 [2024-11-20 12:24:29.113076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.642 [2024-11-20 12:24:29.113110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:23.642 [2024-11-20 12:24:29.113116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.642 [2024-11-20 12:24:29.113121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.642 [2024-11-20 12:24:29.113126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.642 [2024-11-20 12:24:29.113664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 [2024-11-20 12:24:29.856461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 Malloc0 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.212 [2024-11-20 12:24:29.906480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.212 12:24:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=788544 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 788544 /var/tmp/bdevperf.sock 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 788544 ']' 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.212 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:24.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:24.213 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.213 12:24:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.213 [2024-11-20 12:24:29.955687] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:10:24.213 [2024-11-20 12:24:29.955728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788544 ] 00:10:24.472 [2024-11-20 12:24:30.029294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.472 [2024-11-20 12:24:30.075581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.472 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.472 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:24.472 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:24.472 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.472 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.731 NVMe0n1 00:10:24.731 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.731 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:24.731 Running I/O for 10 seconds... 
00:10:26.607 13259.00 IOPS, 51.79 MiB/s [2024-11-20T11:24:33.749Z] 13306.50 IOPS, 51.98 MiB/s [2024-11-20T11:24:34.686Z] 13480.00 IOPS, 52.66 MiB/s [2024-11-20T11:24:35.623Z] 13547.75 IOPS, 52.92 MiB/s [2024-11-20T11:24:36.561Z] 13546.20 IOPS, 52.91 MiB/s [2024-11-20T11:24:37.498Z] 13631.17 IOPS, 53.25 MiB/s [2024-11-20T11:24:38.435Z] 13658.43 IOPS, 53.35 MiB/s [2024-11-20T11:24:39.812Z] 13670.88 IOPS, 53.40 MiB/s [2024-11-20T11:24:40.749Z] 13703.44 IOPS, 53.53 MiB/s [2024-11-20T11:24:40.750Z] 13697.90 IOPS, 53.51 MiB/s 00:10:34.986 Latency(us) 00:10:34.986 [2024-11-20T11:24:40.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.986 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:34.986 Verification LBA range: start 0x0 length 0x4000 00:10:34.986 NVMe0n1 : 10.04 13740.15 53.67 0.00 0.00 74312.21 11856.06 48377.48 00:10:34.986 [2024-11-20T11:24:40.750Z] =================================================================================================================== 00:10:34.986 [2024-11-20T11:24:40.750Z] Total : 13740.15 53.67 0.00 0.00 74312.21 11856.06 48377.48 00:10:34.986 { 00:10:34.986 "results": [ 00:10:34.986 { 00:10:34.986 "job": "NVMe0n1", 00:10:34.986 "core_mask": "0x1", 00:10:34.986 "workload": "verify", 00:10:34.986 "status": "finished", 00:10:34.986 "verify_range": { 00:10:34.986 "start": 0, 00:10:34.986 "length": 16384 00:10:34.986 }, 00:10:34.986 "queue_depth": 1024, 00:10:34.986 "io_size": 4096, 00:10:34.986 "runtime": 10.043778, 00:10:34.986 "iops": 13740.148378428914, 00:10:34.986 "mibps": 53.672454603237945, 00:10:34.986 "io_failed": 0, 00:10:34.986 "io_timeout": 0, 00:10:34.986 "avg_latency_us": 74312.21008723788, 00:10:34.986 "min_latency_us": 11856.058181818182, 00:10:34.986 "max_latency_us": 48377.483636363635 00:10:34.986 } 00:10:34.986 ], 00:10:34.986 "core_count": 1 00:10:34.986 } 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
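The MiB/s column in the bdevperf summary above is derived from the IOPS figure and the 4096-byte I/O size. A quick cross-check of that arithmetic, using the values copied from the JSON result block (a sanity check only, not part of the test):

```shell
# Cross-check the bdevperf summary: MiB/s = IOPS * io_size bytes / 2^20.
# Values copied verbatim from the JSON results above.
iops=13740.148378428914
io_size=4096   # bytes per I/O (-o 4096 on the bdevperf command line)

awk -v iops="$iops" -v sz="$io_size" \
  'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
# prints: 53.67 MiB/s
```

This matches the reported `"mibps": 53.672454603237945` to two decimal places.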
-- # killprocess 788544 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 788544 ']' 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 788544 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 788544 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 788544' 00:10:34.986 killing process with pid 788544 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 788544 00:10:34.986 Received shutdown signal, test time was about 10.000000 seconds 00:10:34.986 00:10:34.986 Latency(us) 00:10:34.986 [2024-11-20T11:24:40.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.986 [2024-11-20T11:24:40.750Z] =================================================================================================================== 00:10:34.986 [2024-11-20T11:24:40.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 788544 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.986 rmmod nvme_tcp 00:10:34.986 rmmod nvme_fabrics 00:10:34.986 rmmod nvme_keyring 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 788380 ']' 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 788380 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 788380 ']' 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 788380 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.986 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 788380 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 788380' 00:10:35.246 killing process with pid 788380 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 788380 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 788380 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.246 12:24:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.784 12:24:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.784 00:10:37.784 real 0m20.496s 00:10:37.784 user 0m24.037s 00:10:37.784 sys 0m5.977s 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.784 ************************************ 00:10:37.784 END TEST nvmf_queue_depth 00:10:37.784 ************************************ 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.784 ************************************ 00:10:37.784 START TEST nvmf_target_multipath 00:10:37.784 ************************************ 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.784 * Looking for test storage... 
00:10:37.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:37.784 12:24:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
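The trace above walks `scripts/common.sh` comparing the lcov version against 2 (`lt 1.15 2`): both strings are split on `.`, compared component by component, with missing components treated as 0. A trimmed, self-contained sketch of that comparison (simplified from the traced logic, not the exact `cmp_versions` source):

```shell
# Sketch of the version compare traced above: return 0 (true) iff $1 < $2.
# Components are compared numerically; a missing component counts as 0.
lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # versions equal, so not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"     # first components differ: 1 < 2
lt 2 1.15 || echo "2 >= 1.15"
```

This is why the script selects the pre-2.x lcov option set (`--rc lcov_branch_coverage=1 ...`) in the lines that follow.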
00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.784 --rc genhtml_branch_coverage=1 00:10:37.784 --rc genhtml_function_coverage=1 00:10:37.784 --rc genhtml_legend=1 00:10:37.784 --rc geninfo_all_blocks=1 00:10:37.784 --rc geninfo_unexecuted_blocks=1 00:10:37.784 00:10:37.784 ' 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.784 --rc genhtml_branch_coverage=1 00:10:37.784 --rc genhtml_function_coverage=1 00:10:37.784 --rc genhtml_legend=1 00:10:37.784 --rc geninfo_all_blocks=1 00:10:37.784 --rc geninfo_unexecuted_blocks=1 00:10:37.784 00:10:37.784 ' 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.784 --rc genhtml_branch_coverage=1 00:10:37.784 --rc genhtml_function_coverage=1 00:10:37.784 --rc genhtml_legend=1 00:10:37.784 --rc geninfo_all_blocks=1 00:10:37.784 --rc geninfo_unexecuted_blocks=1 00:10:37.784 00:10:37.784 ' 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.784 --rc genhtml_branch_coverage=1 00:10:37.784 --rc genhtml_function_coverage=1 00:10:37.784 --rc genhtml_legend=1 00:10:37.784 --rc geninfo_all_blocks=1 00:10:37.784 --rc geninfo_unexecuted_blocks=1 00:10:37.784 00:10:37.784 ' 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.784 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.785 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.356 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:10:44.357 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:10:44.357 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:10:44.357 Found net devices under 0000:1a:00.0: cvl_0_0 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.357 12:24:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:10:44.357 Found net devices under 0000:1a:00.1: cvl_0_1 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:44.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:10:44.357 00:10:44.357 --- 10.0.0.2 ping statistics --- 00:10:44.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.357 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:44.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:10:44.357 00:10:44.357 --- 10.0.0.1 ping statistics --- 00:10:44.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.357 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.357 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:44.358 only one NIC for nvmf test 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:44.358 12:24:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.358 rmmod nvme_tcp 00:10:44.358 rmmod nvme_fabrics 00:10:44.358 rmmod nvme_keyring 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.358 12:24:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.263 00:10:46.263 real 0m8.654s 00:10:46.263 user 0m1.882s 00:10:46.263 sys 0m4.767s 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:46.263 ************************************ 00:10:46.263 END TEST nvmf_target_multipath 00:10:46.263 ************************************ 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.263 ************************************ 00:10:46.263 START TEST nvmf_zcopy 00:10:46.263 ************************************ 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:46.263 * Looking for test storage... 00:10:46.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.263 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.263 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.264 12:24:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.264 --rc genhtml_branch_coverage=1 00:10:46.264 --rc genhtml_function_coverage=1 00:10:46.264 --rc genhtml_legend=1 00:10:46.264 --rc geninfo_all_blocks=1 00:10:46.264 --rc geninfo_unexecuted_blocks=1 00:10:46.264 00:10:46.264 ' 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.264 --rc genhtml_branch_coverage=1 00:10:46.264 --rc genhtml_function_coverage=1 00:10:46.264 --rc genhtml_legend=1 00:10:46.264 --rc geninfo_all_blocks=1 00:10:46.264 --rc geninfo_unexecuted_blocks=1 00:10:46.264 00:10:46.264 ' 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.264 --rc genhtml_branch_coverage=1 00:10:46.264 --rc genhtml_function_coverage=1 00:10:46.264 --rc genhtml_legend=1 00:10:46.264 --rc geninfo_all_blocks=1 00:10:46.264 --rc geninfo_unexecuted_blocks=1 00:10:46.264 00:10:46.264 ' 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.264 --rc genhtml_branch_coverage=1 00:10:46.264 --rc 
genhtml_function_coverage=1 00:10:46.264 --rc genhtml_legend=1 00:10:46.264 --rc geninfo_all_blocks=1 00:10:46.264 --rc geninfo_unexecuted_blocks=1 00:10:46.264 00:10:46.264 ' 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.264 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.524 12:24:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.524 12:24:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.524 12:24:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.099 12:24:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:10:53.099 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.099 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:10:53.099 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:10:53.100 Found net devices under 0000:1a:00.0: cvl_0_0 00:10:53.100 12:24:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:10:53.100 Found net devices under 0000:1a:00.1: cvl_0_1 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.100 12:24:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.100 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:10:53.100 00:10:53.100 --- 10.0.0.2 ping statistics --- 00:10:53.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.100 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:10:53.100 00:10:53.100 --- 10.0.0.1 ping statistics --- 00:10:53.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.100 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=798057 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 798057 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 798057 ']' 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.100 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.100 [2024-11-20 12:24:58.273852] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:53.100 [2024-11-20 12:24:58.273891] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.100 [2024-11-20 12:24:58.350762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.100 [2024-11-20 12:24:58.387104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.100 [2024-11-20 12:24:58.387137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:53.100 [2024-11-20 12:24:58.387144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.100 [2024-11-20 12:24:58.387149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.100 [2024-11-20 12:24:58.387154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.100 [2024-11-20 12:24:58.387753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.359 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 [2024-11-20 12:24:59.122492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 [2024-11-20 12:24:59.142682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 malloc0 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:53.620 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:53.621 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.621 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.621 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.621 { 00:10:53.621 "params": { 00:10:53.621 "name": "Nvme$subsystem", 00:10:53.621 "trtype": "$TEST_TRANSPORT", 00:10:53.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.621 "adrfam": "ipv4", 00:10:53.621 "trsvcid": "$NVMF_PORT", 00:10:53.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.621 "hdgst": ${hdgst:-false}, 00:10:53.621 "ddgst": ${ddgst:-false} 00:10:53.621 }, 00:10:53.621 "method": "bdev_nvme_attach_controller" 00:10:53.621 } 00:10:53.621 EOF 00:10:53.621 )") 00:10:53.621 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:53.621 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:53.621 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:53.621 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.621 "params": { 00:10:53.621 "name": "Nvme1", 00:10:53.621 "trtype": "tcp", 00:10:53.621 "traddr": "10.0.0.2", 00:10:53.621 "adrfam": "ipv4", 00:10:53.621 "trsvcid": "4420", 00:10:53.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.621 "hdgst": false, 00:10:53.621 "ddgst": false 00:10:53.621 }, 00:10:53.621 "method": "bdev_nvme_attach_controller" 00:10:53.621 }' 00:10:53.621 [2024-11-20 12:24:59.227964] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:53.621 [2024-11-20 12:24:59.228004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798090 ] 00:10:53.621 [2024-11-20 12:24:59.302620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.621 [2024-11-20 12:24:59.343841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.881 Running I/O for 10 seconds... 
00:10:55.826 9431.00 IOPS, 73.68 MiB/s [2024-11-20T11:25:02.967Z] 9550.00 IOPS, 74.61 MiB/s [2024-11-20T11:25:03.903Z] 9586.33 IOPS, 74.89 MiB/s [2024-11-20T11:25:04.841Z] 9606.50 IOPS, 75.05 MiB/s [2024-11-20T11:25:05.778Z] 9623.00 IOPS, 75.18 MiB/s [2024-11-20T11:25:06.714Z] 9628.17 IOPS, 75.22 MiB/s [2024-11-20T11:25:07.651Z] 9637.57 IOPS, 75.29 MiB/s [2024-11-20T11:25:09.030Z] 9638.00 IOPS, 75.30 MiB/s [2024-11-20T11:25:09.966Z] 9641.56 IOPS, 75.32 MiB/s [2024-11-20T11:25:09.966Z] 9648.70 IOPS, 75.38 MiB/s 00:11:04.202 Latency(us) 00:11:04.202 [2024-11-20T11:25:09.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.202 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:04.202 Verification LBA range: start 0x0 length 0x1000 00:11:04.202 Nvme1n1 : 10.01 9650.71 75.40 0.00 0.00 13225.56 517.59 23116.33 00:11:04.202 [2024-11-20T11:25:09.966Z] =================================================================================================================== 00:11:04.202 [2024-11-20T11:25:09.966Z] Total : 9650.71 75.40 0.00 0.00 13225.56 517.59 23116.33 00:11:04.202 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=800132 00:11:04.202 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:04.202 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.202 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:04.203 12:25:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:04.203 { 00:11:04.203 "params": { 00:11:04.203 "name": "Nvme$subsystem", 00:11:04.203 "trtype": "$TEST_TRANSPORT", 00:11:04.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.203 "adrfam": "ipv4", 00:11:04.203 "trsvcid": "$NVMF_PORT", 00:11:04.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.203 "hdgst": ${hdgst:-false}, 00:11:04.203 "ddgst": ${ddgst:-false} 00:11:04.203 }, 00:11:04.203 "method": "bdev_nvme_attach_controller" 00:11:04.203 } 00:11:04.203 EOF 00:11:04.203 )") 00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:04.203 [2024-11-20 12:25:09.773576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.773606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:04.203 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:04.203 "params": { 00:11:04.203 "name": "Nvme1", 00:11:04.203 "trtype": "tcp", 00:11:04.203 "traddr": "10.0.0.2", 00:11:04.203 "adrfam": "ipv4", 00:11:04.203 "trsvcid": "4420", 00:11:04.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.203 "hdgst": false, 00:11:04.203 "ddgst": false 00:11:04.203 }, 00:11:04.203 "method": "bdev_nvme_attach_controller" 00:11:04.203 }' 00:11:04.203 [2024-11-20 12:25:09.785571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.785583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.797596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.797606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.809625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.809635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.812353] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:11:04.203 [2024-11-20 12:25:09.812394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800132 ] 00:11:04.203 [2024-11-20 12:25:09.821659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.821670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.833689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.833700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.845722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.845737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.857754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.857764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.869785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.869796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.881819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.881828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.885623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.203 [2024-11-20 12:25:09.893849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:04.203 [2024-11-20 12:25:09.893861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.905884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.905896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.917912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.917922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.924445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.203 [2024-11-20 12:25:09.929942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.929952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.941994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.942013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.203 [2024-11-20 12:25:09.954027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.203 [2024-11-20 12:25:09.954045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:09.966054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:09.966068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:09.978085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:09.978096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:09.990114] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:09.990126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.002145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.002155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.014215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.014241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.026243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.026265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.038257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.038276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.050280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.050290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.062311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.062326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.074350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.074365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.086382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.086398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.098417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.098431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.145643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.145662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.154561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.154573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 Running I/O for 5 seconds... 00:11:04.463 [2024-11-20 12:25:10.170428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.170448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.183523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.183543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.197084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.197102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.210363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.210382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.463 [2024-11-20 12:25:10.223260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:04.463 [2024-11-20 12:25:10.223279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.236900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.236920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.249876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.249896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.262581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.262600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.274947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.274965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.287375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.287393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.300745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.300764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.314032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.314051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.327222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 
[2024-11-20 12:25:10.327241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.340818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.340836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.354338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.354356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.367635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.367653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.380776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.380795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.393869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.393888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.407759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.407777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.420837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.420856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.433969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.433987] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.446769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.446788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.460056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.460075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.722 [2024-11-20 12:25:10.472932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.722 [2024-11-20 12:25:10.472952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.486266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.486286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.499601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.499620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.512750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.512769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.526132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.526150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.539365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.539383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:04.981 [2024-11-20 12:25:10.552329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.552348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.565713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.565732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.578750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.578768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.592024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.592042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.605322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.605341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.618299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.618317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.631705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.631724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.645010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.645029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.658040] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.658059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.671385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.671404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.684643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.684662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.697328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.697346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.710870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.710888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.723991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.724009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.981 [2024-11-20 12:25:10.737129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.981 [2024-11-20 12:25:10.737147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.750606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.750626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.763872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.763891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.776679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.776698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.789439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.789457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.802878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.802896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.816195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.816214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.829207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.829226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.842295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.842316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.855810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.855830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.868790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 
[2024-11-20 12:25:10.868810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.882097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.882117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.894970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.894990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.908362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.908383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.921191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.921211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.933590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.933609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.946699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.946718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.960054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.960073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.973531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.973550] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.987002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.987021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.241 [2024-11-20 12:25:10.999772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.241 [2024-11-20 12:25:10.999791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.013097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.013116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.026502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.026521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.039393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.039419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.053011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.053031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.066082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.066102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.079239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.079258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:05.500 [2024-11-20 12:25:11.092599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.092619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.105692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.105711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.118883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.118902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.132242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.132262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.145443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.145463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.158550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.158569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 18906.00 IOPS, 147.70 MiB/s [2024-11-20T11:25:11.264Z] [2024-11-20 12:25:11.171563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.171582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.185029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.185048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:05.500 [2024-11-20 12:25:11.197551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.197570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.210880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.210899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.224167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.224185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.237385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.237404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.500 [2024-11-20 12:25:11.250523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.500 [2024-11-20 12:25:11.250542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.264095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.264115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.277556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.277575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.290064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.290083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.302917] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.302935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.316738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.316757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.330053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.330075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.342878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.342896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.356033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.356053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.369506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.369524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.382149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.382167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.395472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.395490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.408649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.408668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.422026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.422045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.435140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.435158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.448368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.448387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.461659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.461677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.474834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.474853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.488248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.488267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.501307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 [2024-11-20 12:25:11.501326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.760 [2024-11-20 12:25:11.514845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:05.760 
[2024-11-20 12:25:11.514865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.528005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.528024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.540986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.541005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.553888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.553907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.567303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.567321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.580805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.580826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.593589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.593607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.606865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.606883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.620114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.620132] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.632847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.632866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.647054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.647073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.660229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.660247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.673182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.673201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.686537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.686555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.700051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.700069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.709254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.709272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.019 [2024-11-20 12:25:11.722559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.722578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:06.019 [2024-11-20 12:25:11.735916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.019 [2024-11-20 12:25:11.735934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2123 ("Requested NSID 1 already in use") / nvmf_rpc.c:1517 ("Unable to add namespace") error pair repeats roughly every 13 ms, from 12:25:11.748864 through 12:25:12.152767 (elapsed 00:11:06.019-00:11:06.538) ...]
00:11:06.538 18972.00 IOPS, 148.22 MiB/s [2024-11-20T11:25:12.302Z]
[... error pair repeats from 12:25:12.166313 through 12:25:13.159008 (elapsed 00:11:06.538-00:11:07.576) ...]
00:11:07.576 18989.33 IOPS, 148.35 MiB/s [2024-11-20T11:25:13.340Z]
[... error pair repeats from 12:25:13.172259 through 12:25:13.859298 (elapsed 00:11:07.576-00:11:08.355) ...]
00:11:08.355 [2024-11-20 12:25:13.872142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355
[2024-11-20 12:25:13.872160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.884691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.884709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.897926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.897945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.911734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.911753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.924617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.924635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.937575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.937594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.951100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.951119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.964490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.964509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.977792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.977811] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:13.991297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:13.991316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.003851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.003869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.016812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.016832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.030273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.030292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.043865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.043884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.057135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.057154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.070339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.070357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.083757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.083776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:08.355 [2024-11-20 12:25:14.096841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.096861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.355 [2024-11-20 12:25:14.109932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.355 [2024-11-20 12:25:14.109950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.614 [2024-11-20 12:25:14.123185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.614 [2024-11-20 12:25:14.123205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.614 [2024-11-20 12:25:14.136389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.614 [2024-11-20 12:25:14.136409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.614 [2024-11-20 12:25:14.149157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.614 [2024-11-20 12:25:14.149176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.614 [2024-11-20 12:25:14.162604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.614 [2024-11-20 12:25:14.162622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.614 18990.75 IOPS, 148.37 MiB/s [2024-11-20T11:25:14.378Z] [2024-11-20 12:25:14.175953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.614 [2024-11-20 12:25:14.175972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.614 [2024-11-20 12:25:14.188613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.614 [2024-11-20 12:25:14.188632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:08.614 [2024-11-20 12:25:14.201766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.614 [2024-11-20 12:25:14.201784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.215036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.215054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.228127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.228144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.241108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.241126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.254667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.254687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.267833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.267852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.280773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.280792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.294277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.294301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.307611] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.307630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.320375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.320394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.333817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.333836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.347202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.347221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.360646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.360666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.615 [2024-11-20 12:25:14.373927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.615 [2024-11-20 12:25:14.373945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.386995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.387014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.400297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.400316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.413550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.413569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.426623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.426641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.440334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.440353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.452689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.452707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.465505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.465523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.479288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.479306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.492400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.492422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.505181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.505200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.518493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 
[2024-11-20 12:25:14.518512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.531581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.531600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.544741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.544766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.558070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.558089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.571696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.571714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.584653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.584672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.597522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.597540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.610666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.610684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.874 [2024-11-20 12:25:14.623565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.874 [2024-11-20 12:25:14.623583] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.637158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.637177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.649871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.649889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.662932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.662951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.676128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.676146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.688773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.688792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.702322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.702340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.715649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.715667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.728977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.728995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:09.133 [2024-11-20 12:25:14.741627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.741645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.756005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.756024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.769090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.769109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.782234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.782252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.795721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.795745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.808701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.808720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.822403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.822426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.835943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.835962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.849036] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.849055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.862543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.862561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.875890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.875908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.133 [2024-11-20 12:25:14.888972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.133 [2024-11-20 12:25:14.888991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.901823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.901843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.914850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.914869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.927796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.927814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.940970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.940988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.954668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.954686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.967685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.967703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.981100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.981118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:14.990488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:14.990506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.003886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.003904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.017277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.017295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.030606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.030624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.043888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.043906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.056324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 
[2024-11-20 12:25:15.056341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.069466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.069484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.082618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.082637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.096007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.096026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.108894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.108913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.122218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.122237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.135219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.135238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.392 [2024-11-20 12:25:15.148533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.392 [2024-11-20 12:25:15.148551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.652 [2024-11-20 12:25:15.161596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.652 [2024-11-20 12:25:15.161616] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:09.652 18989.40 IOPS, 148.35 MiB/s [2024-11-20T11:25:15.416Z] 
00:11:09.652 [2024-11-20 12:25:15.174257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:09.652 [2024-11-20 12:25:15.174275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:09.652 
00:11:09.652 Latency(us) 
00:11:09.652 [2024-11-20T11:25:15.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:11:09.652 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:11:09.652 Nvme1n1 : 5.01 18992.24 148.38 0.00 0.00 6733.85 3083.17 15609.48 
00:11:09.652 [2024-11-20T11:25:15.416Z] =================================================================================================================== 
00:11:09.652 [2024-11-20T11:25:15.416Z] Total : 18992.24 148.38 0.00 0.00 6733.85 3083.17 15609.48 
00:11:09.652 [2024-11-20 12:25:15.184173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:09.652 [2024-11-20 12:25:15.184190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:09.652 [2024-11-20 12:25:15.196197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:09.652 [2024-11-20 12:25:15.196211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:09.652 [2024-11-20 12:25:15.208243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:09.652 [2024-11-20 12:25:15.208262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:09.652 [2024-11-20 12:25:15.220265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:09.652 [2024-11-20 12:25:15.220282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:09.652 [2024-11-20 12:25:15.232295] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:09.652 [2024-11-20 12:25:15.232309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[this ERROR pair repeated at 12 ms intervals from 12:25:15.244 through 12:25:15.316; identical repeats omitted] 
00:11:09.652 [2024-11-20 12:25:15.328542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 
00:11:09.652 [2024-11-20 12:25:15.328552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:09.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (800132) - No such process 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 800132 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:11:09.652 delay0 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:09.652 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:09.911 [2024-11-20 12:25:15.433442] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:16.475 [2024-11-20 12:25:21.644446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9020 is same with the state(6) to be set 00:11:16.475 [2024-11-20 12:25:21.644487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9020 is same with the state(6) to be set 00:11:16.475 [2024-11-20 12:25:21.644495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9020 is same with the state(6) to be set 00:11:16.475 Initializing NVMe Controllers 00:11:16.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:16.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:16.476 Initialization complete. Launching workers. 
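The trace above is the core of the zcopy abort test: the namespace is removed, `bdev_delay_create` wraps `malloc0` in a delay bdev so in-flight I/O lingers long enough to be aborted, the namespace is re-added, and `build/examples/abort` fires abort commands at it over TCP. A dry-run sketch of that sequence (the `scripts/rpc.py` helper and `SPDK_DIR` default are assumptions based on a standard SPDK checkout; `rpc_cmd` in the log is the test suite's own wrapper; commands are echoed, not executed):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the zcopy abort sequence seen in the log above.
# Assumption: SPDK_DIR points at an SPDK checkout with scripts/rpc.py.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NQN=nqn.2016-06.io.spdk:cnode1

run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

run "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_ns "$NQN" 1
# Wrap malloc0 in a delay bdev so I/O stays in flight long enough to abort.
run "$SPDK_DIR/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
run "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" delay0 -n 1
# Fire aborts at the slow namespace for 5 seconds, as in the trace.
run "$SPDK_DIR/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```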
00:11:16.476 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 291, failed: 10676 00:11:16.476 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10890, failed to submit 77 00:11:16.476 success 10748, unsuccessful 142, failed 0 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.476 rmmod nvme_tcp 00:11:16.476 rmmod nvme_fabrics 00:11:16.476 rmmod nvme_keyring 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 798057 ']' 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 798057 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 798057 ']' 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 798057 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 798057 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 798057' 00:11:16.476 killing process with pid 798057 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 798057 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 798057 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.476 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.385 00:11:18.385 real 0m32.186s 00:11:18.385 user 0m43.732s 00:11:18.385 sys 0m10.241s 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.385 ************************************ 00:11:18.385 END TEST nvmf_zcopy 00:11:18.385 ************************************ 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.385 ************************************ 00:11:18.385 START TEST nvmf_nmic 00:11:18.385 ************************************ 00:11:18.385 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:18.713 * Looking for test storage... 
00:11:18.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.713 12:25:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.713 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.714 --rc genhtml_branch_coverage=1 00:11:18.714 --rc genhtml_function_coverage=1 00:11:18.714 --rc genhtml_legend=1 00:11:18.714 --rc geninfo_all_blocks=1 00:11:18.714 --rc geninfo_unexecuted_blocks=1 
00:11:18.714 00:11:18.714 ' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.714 --rc genhtml_branch_coverage=1 00:11:18.714 --rc genhtml_function_coverage=1 00:11:18.714 --rc genhtml_legend=1 00:11:18.714 --rc geninfo_all_blocks=1 00:11:18.714 --rc geninfo_unexecuted_blocks=1 00:11:18.714 00:11:18.714 ' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.714 --rc genhtml_branch_coverage=1 00:11:18.714 --rc genhtml_function_coverage=1 00:11:18.714 --rc genhtml_legend=1 00:11:18.714 --rc geninfo_all_blocks=1 00:11:18.714 --rc geninfo_unexecuted_blocks=1 00:11:18.714 00:11:18.714 ' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.714 --rc genhtml_branch_coverage=1 00:11:18.714 --rc genhtml_function_coverage=1 00:11:18.714 --rc genhtml_legend=1 00:11:18.714 --rc geninfo_all_blocks=1 00:11:18.714 --rc geninfo_unexecuted_blocks=1 00:11:18.714 00:11:18.714 ' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.714 12:25:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.714 
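The `[: : integer expression expected` complaint from common.sh line 33 above is the classic `test` pitfall: `-eq` needs an integer on both sides, and an unset variable expands to the empty string, so `'[' '' -eq 1 ']'` errors rather than evaluating false. A minimal reproduction and the usual `${var:-0}` default fix (the variable name here is illustrative, not the one common.sh actually uses):

```shell
#!/usr/bin/env bash
# Reproduces the "[: : integer expression expected" error from the log:
HUGE_FLAG=""                              # hypothetical name, for illustration
if [ "$HUGE_FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"                       # never reached; test errors instead
fi

# Fix: default the expansion so the comparison always sees an integer.
if [ "${HUGE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi
```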
12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.714 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.359 12:25:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:11:25.359 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:11:25.359 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.359 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:11:25.360 Found net devices under 0000:1a:00.0: cvl_0_0 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:11:25.360 Found net devices under 0000:1a:00.1: cvl_0_1 00:11:25.360 
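The `gather_supported_nvmf_pci_devs` trace above walks known vendor:device pairs (here Intel E810, `0x8086:0x159b`), then collects the bound interfaces from `/sys/bus/pci/devices/$pci/net/` — yielding `cvl_0_0` and `cvl_0_1` on this rig. A simplified sketch of that scan, parameterized on the sysfs root so it can be pointed at a mock tree (function name is mine, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of the discovery loop in the log: walk a sysfs-style tree and
# print net interfaces under PCI functions matching a vendor:device pair.
find_nvmf_net_devs() {
    local sysfs=${1:-/sys/bus/pci/devices} want_vendor=$2 want_device=$3 pci
    for pci in "$sysfs"/*; do
        [ -e "$pci/vendor" ] || continue
        if [ "$(cat "$pci/vendor")" = "$want_vendor" ] &&
           [ "$(cat "$pci/device")" = "$want_device" ]; then
            # The kernel exposes bound interfaces as net/<ifname>.
            ls "$pci/net" 2>/dev/null
        fi
    done
}
```

On the test rig above, `find_nvmf_net_devs /sys/bus/pci/devices 0x8086 0x159b` would list the two `cvl_*` interfaces.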
12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:11:25.360 00:11:25.360 --- 10.0.0.2 ping statistics --- 00:11:25.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.360 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:25.360 00:11:25.360 --- 10.0.0.1 ping statistics --- 00:11:25.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.360 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=806066 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 806066 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 806066 ']' 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.360 12:25:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.360 [2024-11-20 12:25:30.563371] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:25.360 [2024-11-20 12:25:30.563423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.360 [2024-11-20 12:25:30.639954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.360 [2024-11-20 12:25:30.678300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.360 [2024-11-20 12:25:30.678338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:25.360 [2024-11-20 12:25:30.678344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.360 [2024-11-20 12:25:30.678349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.360 [2024-11-20 12:25:30.678353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.360 [2024-11-20 12:25:30.679933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.360 [2024-11-20 12:25:30.680046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.360 [2024-11-20 12:25:30.680138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.360 [2024-11-20 12:25:30.680139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.620 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.620 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:25.620 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.620 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.620 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 [2024-11-20 12:25:31.420882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.885 
12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 Malloc0 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 [2024-11-20 12:25:31.487236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:25.885 test case1: single bdev can't be used in multiple subsystems 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 [2024-11-20 12:25:31.519114] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:25.885 [2024-11-20 
12:25:31.519131] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:25.885 [2024-11-20 12:25:31.519138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.885 request: 00:11:25.885 { 00:11:25.885 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:25.885 "namespace": { 00:11:25.885 "bdev_name": "Malloc0", 00:11:25.885 "no_auto_visible": false 00:11:25.885 }, 00:11:25.885 "method": "nvmf_subsystem_add_ns", 00:11:25.885 "req_id": 1 00:11:25.885 } 00:11:25.885 Got JSON-RPC error response 00:11:25.885 response: 00:11:25.885 { 00:11:25.885 "code": -32602, 00:11:25.885 "message": "Invalid parameters" 00:11:25.885 } 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:25.885 Adding namespace failed - expected result. 
00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:25.885 test case2: host connect to nvmf target in multiple paths 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 [2024-11-20 12:25:31.531247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.885 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.264 12:25:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:28.643 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.643 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:28.643 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.643 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:28.643 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:30.549 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:30.549 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:30.549 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.549 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:30.549 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.549 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:30.549 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:30.549 [global] 00:11:30.549 thread=1 00:11:30.549 invalidate=1 00:11:30.549 rw=write 00:11:30.549 time_based=1 00:11:30.549 runtime=1 00:11:30.549 ioengine=libaio 00:11:30.549 direct=1 00:11:30.549 bs=4096 00:11:30.549 iodepth=1 00:11:30.549 norandommap=0 00:11:30.549 numjobs=1 00:11:30.549 00:11:30.549 verify_dump=1 00:11:30.549 verify_backlog=512 00:11:30.549 verify_state_save=0 00:11:30.549 do_verify=1 00:11:30.549 verify=crc32c-intel 00:11:30.549 [job0] 00:11:30.549 filename=/dev/nvme0n1 00:11:30.549 Could not set queue depth (nvme0n1) 00:11:31.115 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.115 fio-3.35 00:11:31.115 Starting 1 thread 00:11:32.053 00:11:32.053 job0: (groupid=0, jobs=1): err= 0: pid=807295: Wed Nov 20 12:25:37 2024 00:11:32.053 read: IOPS=22, BW=90.4KiB/s (92.5kB/s)(92.0KiB/1018msec) 00:11:32.053 slat (nsec): min=9403, max=23444, avg=21633.30, stdev=2732.69 00:11:32.053 clat (usec): min=40797, max=41023, avg=40962.65, stdev=48.62 00:11:32.053 lat (usec): min=40806, max=41045, 
avg=40984.28, stdev=50.40 00:11:32.053 clat percentiles (usec): 00:11:32.053 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:32.053 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:32.053 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:32.053 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:32.053 | 99.99th=[41157] 00:11:32.053 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:32.053 slat (nsec): min=9306, max=38485, avg=10482.05, stdev=1866.17 00:11:32.053 clat (usec): min=112, max=371, avg=134.30, stdev=14.21 00:11:32.053 lat (usec): min=124, max=410, avg=144.79, stdev=15.11 00:11:32.053 clat percentiles (usec): 00:11:32.053 | 1.00th=[ 117], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 130], 00:11:32.053 | 30.00th=[ 133], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 135], 00:11:32.053 | 70.00th=[ 137], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 143], 00:11:32.053 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 371], 99.95th=[ 371], 00:11:32.053 | 99.99th=[ 371] 00:11:32.053 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:32.053 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:32.053 lat (usec) : 250=95.33%, 500=0.37% 00:11:32.053 lat (msec) : 50=4.30% 00:11:32.053 cpu : usr=0.39%, sys=0.39%, ctx=535, majf=0, minf=1 00:11:32.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:32.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.053 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:32.053 00:11:32.053 Run status group 0 (all jobs): 00:11:32.053 READ: bw=90.4KiB/s (92.5kB/s), 90.4KiB/s-90.4KiB/s (92.5kB/s-92.5kB/s), io=92.0KiB (94.2kB), 
run=1018-1018msec 00:11:32.053 WRITE: bw=2012KiB/s (2060kB/s), 2012KiB/s-2012KiB/s (2060kB/s-2060kB/s), io=2048KiB (2097kB), run=1018-1018msec 00:11:32.053 00:11:32.053 Disk stats (read/write): 00:11:32.053 nvme0n1: ios=70/512, merge=0/0, ticks=1125/70, in_queue=1195, util=99.70% 00:11:32.053 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:11:32.312 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.312 rmmod nvme_tcp 00:11:32.312 rmmod nvme_fabrics 00:11:32.312 rmmod nvme_keyring 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 806066 ']' 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 806066 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 806066 ']' 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 806066 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.312 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 806066 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 806066' 00:11:32.572 killing process with pid 806066 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 806066 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 806066 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.572 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.573 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.112 00:11:35.112 real 0m16.248s 00:11:35.112 user 0m40.571s 00:11:35.112 sys 0m5.442s 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:35.112 ************************************ 00:11:35.112 END TEST nvmf_nmic 00:11:35.112 ************************************ 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:35.112 ************************************ 00:11:35.112 START TEST nvmf_fio_target 00:11:35.112 ************************************ 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:35.112 * Looking for test storage... 00:11:35.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.112 12:25:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.112 --rc genhtml_branch_coverage=1 00:11:35.112 --rc genhtml_function_coverage=1 00:11:35.112 --rc genhtml_legend=1 00:11:35.112 --rc geninfo_all_blocks=1 00:11:35.112 --rc geninfo_unexecuted_blocks=1 00:11:35.112 00:11:35.112 ' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.112 --rc genhtml_branch_coverage=1 00:11:35.112 --rc genhtml_function_coverage=1 00:11:35.112 --rc genhtml_legend=1 00:11:35.112 --rc geninfo_all_blocks=1 00:11:35.112 --rc geninfo_unexecuted_blocks=1 00:11:35.112 00:11:35.112 ' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.112 --rc genhtml_branch_coverage=1 00:11:35.112 --rc genhtml_function_coverage=1 00:11:35.112 --rc genhtml_legend=1 00:11:35.112 --rc geninfo_all_blocks=1 00:11:35.112 --rc geninfo_unexecuted_blocks=1 00:11:35.112 00:11:35.112 ' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.112 --rc 
genhtml_branch_coverage=1 00:11:35.112 --rc genhtml_function_coverage=1 00:11:35.112 --rc genhtml_legend=1 00:11:35.112 --rc geninfo_all_blocks=1 00:11:35.112 --rc geninfo_unexecuted_blocks=1 00:11:35.112 00:11:35.112 ' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.112 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.684 12:25:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:11:41.684 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:11:41.684 12:25:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:11:41.684 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:11:41.684 Found net devices under 0000:1a:00.0: cvl_0_0 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.684 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:11:41.685 Found net devices under 0000:1a:00.1: cvl_0_1 
00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:11:41.685 00:11:41.685 --- 10.0.0.2 ping statistics --- 00:11:41.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.685 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:11:41.685 00:11:41.685 --- 10.0.0.1 ping statistics --- 00:11:41.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.685 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=811342 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 811342 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 811342 ']' 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.685 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.685 [2024-11-20 12:25:46.843623] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:11:41.685 [2024-11-20 12:25:46.843670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.685 [2024-11-20 12:25:46.920565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.685 [2024-11-20 12:25:46.960420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.685 [2024-11-20 12:25:46.960458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.685 [2024-11-20 12:25:46.960464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.685 [2024-11-20 12:25:46.960469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.685 [2024-11-20 12:25:46.960473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:41.685 [2024-11-20 12:25:46.962084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.685 [2024-11-20 12:25:46.962129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.685 [2024-11-20 12:25:46.962252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.685 [2024-11-20 12:25:46.962253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.945 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.945 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:41.945 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.945 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.945 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.204 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.204 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:42.204 [2024-11-20 12:25:47.872104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.204 12:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.463 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:42.463 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.723 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:42.723 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.981 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:42.981 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.981 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:42.981 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:43.240 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.499 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:43.499 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.758 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:43.758 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:43.758 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:43.758 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:44.017 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.276 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:44.276 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.536 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:44.536 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.536 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.794 [2024-11-20 12:25:50.399230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.794 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:45.053 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:45.053 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:46.431 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:46.431 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.431 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.431 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:46.432 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:46.432 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:48.339 12:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:48.339 12:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:48.339 12:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.598 12:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:48.598 12:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.598 12:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:48.598 12:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:48.598 [global] 00:11:48.598 thread=1 00:11:48.598 invalidate=1 00:11:48.598 rw=write 00:11:48.598 time_based=1 00:11:48.598 runtime=1 00:11:48.598 ioengine=libaio 00:11:48.598 direct=1 00:11:48.598 bs=4096 00:11:48.599 iodepth=1 00:11:48.599 norandommap=0 00:11:48.599 numjobs=1 00:11:48.599 00:11:48.599 
verify_dump=1 00:11:48.599 verify_backlog=512 00:11:48.599 verify_state_save=0 00:11:48.599 do_verify=1 00:11:48.599 verify=crc32c-intel 00:11:48.599 [job0] 00:11:48.599 filename=/dev/nvme0n1 00:11:48.599 [job1] 00:11:48.599 filename=/dev/nvme0n2 00:11:48.599 [job2] 00:11:48.599 filename=/dev/nvme0n3 00:11:48.599 [job3] 00:11:48.599 filename=/dev/nvme0n4 00:11:48.599 Could not set queue depth (nvme0n1) 00:11:48.599 Could not set queue depth (nvme0n2) 00:11:48.599 Could not set queue depth (nvme0n3) 00:11:48.599 Could not set queue depth (nvme0n4) 00:11:48.857 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.857 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.857 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.857 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.857 fio-3.35 00:11:48.857 Starting 4 threads 00:11:50.248 00:11:50.248 job0: (groupid=0, jobs=1): err= 0: pid=812878: Wed Nov 20 12:25:55 2024 00:11:50.248 read: IOPS=2681, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:11:50.248 slat (nsec): min=6342, max=45602, avg=7778.90, stdev=1577.91 00:11:50.248 clat (usec): min=148, max=672, avg=190.88, stdev=32.73 00:11:50.248 lat (usec): min=156, max=681, avg=198.66, stdev=32.65 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:11:50.248 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:11:50.248 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 227], 95.00th=[ 253], 00:11:50.248 | 99.00th=[ 338], 99.50th=[ 392], 99.90th=[ 445], 99.95th=[ 490], 00:11:50.248 | 99.99th=[ 676] 00:11:50.248 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:50.248 slat (nsec): min=8417, max=39827, avg=11366.67, stdev=1974.47 
00:11:50.248 clat (usec): min=96, max=3692, avg=135.46, stdev=67.40 00:11:50.248 lat (usec): min=106, max=3702, avg=146.83, stdev=67.66 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 119], 00:11:50.248 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 133], 00:11:50.248 | 70.00th=[ 139], 80.00th=[ 149], 90.00th=[ 165], 95.00th=[ 176], 00:11:50.248 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 255], 99.95th=[ 318], 00:11:50.248 | 99.99th=[ 3687] 00:11:50.248 bw ( KiB/s): min=12288, max=12288, per=55.91%, avg=12288.00, stdev= 0.00, samples=1 00:11:50.248 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:50.248 lat (usec) : 100=0.24%, 250=96.99%, 500=2.73%, 750=0.02% 00:11:50.248 lat (msec) : 4=0.02% 00:11:50.248 cpu : usr=4.00%, sys=7.20%, ctx=5757, majf=0, minf=1 00:11:50.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 issued rwts: total=2684,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.248 job1: (groupid=0, jobs=1): err= 0: pid=812879: Wed Nov 20 12:25:55 2024 00:11:50.248 read: IOPS=1500, BW=6002KiB/s (6146kB/s)(6008KiB/1001msec) 00:11:50.248 slat (nsec): min=6352, max=30055, avg=7611.36, stdev=1715.72 00:11:50.248 clat (usec): min=146, max=41241, avg=500.59, stdev=3512.29 00:11:50.248 lat (usec): min=153, max=41250, avg=508.20, stdev=3513.47 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:11:50.248 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:11:50.248 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 237], 00:11:50.248 | 99.00th=[ 306], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 
00:11:50.248 | 99.99th=[41157] 00:11:50.248 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:50.248 slat (nsec): min=9426, max=39351, avg=10487.14, stdev=1380.73 00:11:50.248 clat (usec): min=92, max=306, avg=138.77, stdev=25.21 00:11:50.248 lat (usec): min=106, max=345, avg=149.25, stdev=25.41 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 117], 00:11:50.248 | 30.00th=[ 121], 40.00th=[ 126], 50.00th=[ 133], 60.00th=[ 141], 00:11:50.248 | 70.00th=[ 151], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 186], 00:11:50.248 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 251], 99.95th=[ 306], 00:11:50.248 | 99.99th=[ 306] 00:11:50.248 bw ( KiB/s): min= 4096, max= 4096, per=18.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.248 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.248 lat (usec) : 100=0.43%, 250=97.53%, 500=1.65% 00:11:50.248 lat (msec) : 20=0.03%, 50=0.36% 00:11:50.248 cpu : usr=0.90%, sys=3.50%, ctx=3039, majf=0, minf=1 00:11:50.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 issued rwts: total=1502,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.248 job2: (groupid=0, jobs=1): err= 0: pid=812880: Wed Nov 20 12:25:55 2024 00:11:50.248 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:11:50.248 slat (nsec): min=9583, max=27526, avg=22652.32, stdev=3092.72 00:11:50.248 clat (usec): min=40940, max=42065, avg=41775.49, stdev=394.28 00:11:50.248 lat (usec): min=40963, max=42088, avg=41798.15, stdev=394.45 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:50.248 | 30.00th=[41681], 
40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:50.248 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:50.248 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:50.248 | 99.99th=[42206] 00:11:50.248 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:50.248 slat (nsec): min=10091, max=44104, avg=11753.54, stdev=2033.58 00:11:50.248 clat (usec): min=120, max=291, avg=151.85, stdev=26.58 00:11:50.248 lat (usec): min=132, max=335, avg=163.60, stdev=27.01 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:11:50.248 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:11:50.248 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 239], 00:11:50.248 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 293], 99.95th=[ 293], 00:11:50.248 | 99.99th=[ 293] 00:11:50.248 bw ( KiB/s): min= 4096, max= 4096, per=18.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.248 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.248 lat (usec) : 250=94.94%, 500=0.94% 00:11:50.248 lat (msec) : 50=4.12% 00:11:50.248 cpu : usr=0.50%, sys=0.40%, ctx=537, majf=0, minf=1 00:11:50.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.248 job3: (groupid=0, jobs=1): err= 0: pid=812881: Wed Nov 20 12:25:55 2024 00:11:50.248 read: IOPS=178, BW=714KiB/s (731kB/s)(732KiB/1025msec) 00:11:50.248 slat (nsec): min=7043, max=26261, avg=9622.79, stdev=4877.84 00:11:50.248 clat (usec): min=189, max=41303, avg=5080.90, stdev=13058.48 00:11:50.248 lat (usec): min=201, max=41312, 
avg=5090.52, stdev=13062.66 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 229], 20.00th=[ 235], 00:11:50.248 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:11:50.248 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[41157], 95.00th=[41157], 00:11:50.248 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:50.248 | 99.99th=[41157] 00:11:50.248 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:11:50.248 slat (nsec): min=9503, max=38707, avg=10436.64, stdev=1539.38 00:11:50.248 clat (usec): min=121, max=342, avg=168.63, stdev=18.05 00:11:50.248 lat (usec): min=131, max=381, avg=179.07, stdev=18.54 00:11:50.248 clat percentiles (usec): 00:11:50.248 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155], 00:11:50.248 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:11:50.248 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 196], 00:11:50.248 | 99.00th=[ 210], 99.50th=[ 239], 99.90th=[ 343], 99.95th=[ 343], 00:11:50.248 | 99.99th=[ 343] 00:11:50.248 bw ( KiB/s): min= 4096, max= 4096, per=18.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:50.248 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:50.248 lat (usec) : 250=87.34%, 500=9.35% 00:11:50.248 lat (msec) : 10=0.14%, 20=0.14%, 50=3.02% 00:11:50.248 cpu : usr=0.49%, sys=0.49%, ctx=695, majf=0, minf=1 00:11:50.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.248 issued rwts: total=183,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.248 00:11:50.248 Run status group 0 (all jobs): 00:11:50.248 READ: bw=16.7MiB/s (17.5MB/s), 87.6KiB/s-10.5MiB/s (89.7kB/s-11.0MB/s), io=17.2MiB 
(18.0MB), run=1001-1025msec 00:11:50.248 WRITE: bw=21.5MiB/s (22.5MB/s), 1998KiB/s-12.0MiB/s (2046kB/s-12.6MB/s), io=22.0MiB (23.1MB), run=1001-1025msec 00:11:50.248 00:11:50.248 Disk stats (read/write): 00:11:50.248 nvme0n1: ios=2400/2560, merge=0/0, ticks=438/325, in_queue=763, util=86.47% 00:11:50.248 nvme0n2: ios=1074/1185, merge=0/0, ticks=1482/159, in_queue=1641, util=98.27% 00:11:50.248 nvme0n3: ios=76/512, merge=0/0, ticks=1422/73, in_queue=1495, util=98.33% 00:11:50.248 nvme0n4: ios=178/512, merge=0/0, ticks=726/85, in_queue=811, util=89.68% 00:11:50.249 12:25:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:50.249 [global] 00:11:50.249 thread=1 00:11:50.249 invalidate=1 00:11:50.249 rw=randwrite 00:11:50.249 time_based=1 00:11:50.249 runtime=1 00:11:50.249 ioengine=libaio 00:11:50.249 direct=1 00:11:50.249 bs=4096 00:11:50.249 iodepth=1 00:11:50.249 norandommap=0 00:11:50.249 numjobs=1 00:11:50.249 00:11:50.249 verify_dump=1 00:11:50.249 verify_backlog=512 00:11:50.249 verify_state_save=0 00:11:50.249 do_verify=1 00:11:50.249 verify=crc32c-intel 00:11:50.249 [job0] 00:11:50.249 filename=/dev/nvme0n1 00:11:50.249 [job1] 00:11:50.249 filename=/dev/nvme0n2 00:11:50.249 [job2] 00:11:50.249 filename=/dev/nvme0n3 00:11:50.249 [job3] 00:11:50.249 filename=/dev/nvme0n4 00:11:50.249 Could not set queue depth (nvme0n1) 00:11:50.249 Could not set queue depth (nvme0n2) 00:11:50.249 Could not set queue depth (nvme0n3) 00:11:50.249 Could not set queue depth (nvme0n4) 00:11:50.515 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.515 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.515 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:11:50.516 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.516 fio-3.35 00:11:50.516 Starting 4 threads 00:11:51.907 00:11:51.907 job0: (groupid=0, jobs=1): err= 0: pid=813295: Wed Nov 20 12:25:57 2024 00:11:51.907 read: IOPS=642, BW=2571KiB/s (2633kB/s)(2584KiB/1005msec) 00:11:51.907 slat (nsec): min=7138, max=25251, avg=8562.16, stdev=2607.74 00:11:51.907 clat (usec): min=189, max=41240, avg=1251.58, stdev=6315.68 00:11:51.907 lat (usec): min=197, max=41249, avg=1260.14, stdev=6316.43 00:11:51.907 clat percentiles (usec): 00:11:51.907 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 223], 00:11:51.907 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:11:51.907 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 310], 00:11:51.907 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:51.907 | 99.99th=[41157] 00:11:51.907 write: IOPS=1018, BW=4076KiB/s (4173kB/s)(4096KiB/1005msec); 0 zone resets 00:11:51.907 slat (nsec): min=10016, max=39543, avg=11030.29, stdev=1469.17 00:11:51.907 clat (usec): min=112, max=334, avg=170.47, stdev=24.92 00:11:51.907 lat (usec): min=123, max=374, avg=181.50, stdev=25.28 00:11:51.907 clat percentiles (usec): 00:11:51.907 | 1.00th=[ 128], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:11:51.907 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:11:51.907 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 215], 00:11:51.907 | 99.00th=[ 233], 99.50th=[ 245], 99.90th=[ 306], 99.95th=[ 334], 00:11:51.907 | 99.99th=[ 334] 00:11:51.907 bw ( KiB/s): min= 8192, max= 8192, per=38.96%, avg=8192.00, stdev= 0.00, samples=1 00:11:51.907 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:51.907 lat (usec) : 250=84.07%, 500=14.91% 00:11:51.907 lat (msec) : 2=0.06%, 50=0.96% 00:11:51.907 cpu : usr=1.59%, sys=2.39%, ctx=1670, majf=0, minf=1 00:11:51.907 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.907 issued rwts: total=646,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.907 job1: (groupid=0, jobs=1): err= 0: pid=813296: Wed Nov 20 12:25:57 2024 00:11:51.907 read: IOPS=537, BW=2149KiB/s (2201kB/s)(2164KiB/1007msec) 00:11:51.907 slat (nsec): min=7058, max=25596, avg=8582.48, stdev=2879.76 00:11:51.907 clat (usec): min=170, max=41048, avg=1485.48, stdev=7085.32 00:11:51.907 lat (usec): min=177, max=41057, avg=1494.07, stdev=7086.28 00:11:51.907 clat percentiles (usec): 00:11:51.907 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:11:51.907 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:11:51.907 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 258], 00:11:51.907 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:51.907 | 99.99th=[41157] 00:11:51.907 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:11:51.907 slat (nsec): min=10045, max=36344, avg=11513.71, stdev=1882.32 00:11:51.907 clat (usec): min=131, max=320, avg=177.61, stdev=25.44 00:11:51.907 lat (usec): min=143, max=356, avg=189.12, stdev=25.53 00:11:51.907 clat percentiles (usec): 00:11:51.907 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155], 00:11:51.907 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 184], 00:11:51.907 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 219], 00:11:51.907 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 277], 99.95th=[ 322], 00:11:51.907 | 99.99th=[ 322] 00:11:51.907 bw ( KiB/s): min= 8192, max= 8192, per=38.96%, avg=8192.00, stdev= 0.00, samples=1 00:11:51.907 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:51.907 
lat (usec) : 250=97.32%, 500=1.60% 00:11:51.907 lat (msec) : 50=1.09% 00:11:51.908 cpu : usr=1.59%, sys=2.29%, ctx=1565, majf=0, minf=1 00:11:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.908 issued rwts: total=541,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.908 job2: (groupid=0, jobs=1): err= 0: pid=813297: Wed Nov 20 12:25:57 2024 00:11:51.908 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:51.908 slat (nsec): min=6686, max=30875, avg=7605.09, stdev=953.48 00:11:51.908 clat (usec): min=153, max=451, avg=208.01, stdev=33.15 00:11:51.908 lat (usec): min=160, max=477, avg=215.61, stdev=33.17 00:11:51.908 clat percentiles (usec): 00:11:51.908 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:11:51.908 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 206], 00:11:51.908 | 70.00th=[ 229], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 265], 00:11:51.908 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 334], 99.95th=[ 445], 00:11:51.908 | 99.99th=[ 453] 00:11:51.908 write: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:11:51.908 slat (nsec): min=9520, max=37242, avg=10495.08, stdev=1175.51 00:11:51.908 clat (usec): min=101, max=474, avg=149.47, stdev=44.02 00:11:51.908 lat (usec): min=111, max=511, avg=159.96, stdev=44.16 00:11:51.908 clat percentiles (usec): 00:11:51.908 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 117], 00:11:51.908 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 129], 60.00th=[ 135], 00:11:51.908 | 70.00th=[ 159], 80.00th=[ 196], 90.00th=[ 217], 95.00th=[ 247], 00:11:51.908 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 326], 00:11:51.908 | 99.99th=[ 474] 00:11:51.908 bw ( KiB/s): 
min=11896, max=11896, per=56.58%, avg=11896.00, stdev= 0.00, samples=1 00:11:51.908 iops : min= 2974, max= 2974, avg=2974.00, stdev= 0.00, samples=1 00:11:51.908 lat (usec) : 250=90.65%, 500=9.35% 00:11:51.908 cpu : usr=3.20%, sys=4.30%, ctx=5296, majf=0, minf=1 00:11:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.908 issued rwts: total=2560,2733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.908 job3: (groupid=0, jobs=1): err= 0: pid=813298: Wed Nov 20 12:25:57 2024 00:11:51.908 read: IOPS=259, BW=1038KiB/s (1063kB/s)(1044KiB/1006msec) 00:11:51.908 slat (nsec): min=6897, max=26785, avg=9006.74, stdev=4437.70 00:11:51.908 clat (usec): min=174, max=41152, avg=3494.57, stdev=11089.99 00:11:51.908 lat (usec): min=182, max=41162, avg=3503.57, stdev=11093.20 00:11:51.908 clat percentiles (usec): 00:11:51.908 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:11:51.908 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:11:51.908 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[41157], 00:11:51.908 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:51.908 | 99.99th=[41157] 00:11:51.908 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:11:51.908 slat (nsec): min=9799, max=65877, avg=10841.23, stdev=2670.49 00:11:51.908 clat (usec): min=137, max=382, avg=163.89, stdev=16.54 00:11:51.908 lat (usec): min=147, max=447, avg=174.73, stdev=18.23 00:11:51.908 clat percentiles (usec): 00:11:51.908 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:11:51.908 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:11:51.908 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 
186], 00:11:51.908 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 383], 99.95th=[ 383], 00:11:51.908 | 99.99th=[ 383] 00:11:51.908 bw ( KiB/s): min= 4096, max= 4096, per=19.48%, avg=4096.00, stdev= 0.00, samples=1 00:11:51.908 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:51.908 lat (usec) : 250=93.66%, 500=3.62% 00:11:51.908 lat (msec) : 50=2.72% 00:11:51.908 cpu : usr=0.00%, sys=1.19%, ctx=774, majf=0, minf=1 00:11:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.908 issued rwts: total=261,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.908 00:11:51.908 Run status group 0 (all jobs): 00:11:51.908 READ: bw=15.5MiB/s (16.3MB/s), 1038KiB/s-9.99MiB/s (1063kB/s-10.5MB/s), io=15.7MiB (16.4MB), run=1001-1007msec 00:11:51.908 WRITE: bw=20.5MiB/s (21.5MB/s), 2036KiB/s-10.7MiB/s (2085kB/s-11.2MB/s), io=20.7MiB (21.7MB), run=1001-1007msec 00:11:51.908 00:11:51.908 Disk stats (read/write): 00:11:51.908 nvme0n1: ios=691/1024, merge=0/0, ticks=666/170, in_queue=836, util=86.87% 00:11:51.908 nvme0n2: ios=535/1024, merge=0/0, ticks=637/162, in_queue=799, util=86.89% 00:11:51.908 nvme0n3: ios=2072/2380, merge=0/0, ticks=1414/347, in_queue=1761, util=98.33% 00:11:51.908 nvme0n4: ios=300/512, merge=0/0, ticks=1657/83, in_queue=1740, util=97.59% 00:11:51.908 12:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:51.908 [global] 00:11:51.908 thread=1 00:11:51.908 invalidate=1 00:11:51.908 rw=write 00:11:51.908 time_based=1 00:11:51.908 runtime=1 00:11:51.908 ioengine=libaio 00:11:51.908 direct=1 00:11:51.908 bs=4096 00:11:51.908 iodepth=128 
00:11:51.908 norandommap=0 00:11:51.908 numjobs=1 00:11:51.908 00:11:51.908 verify_dump=1 00:11:51.908 verify_backlog=512 00:11:51.908 verify_state_save=0 00:11:51.908 do_verify=1 00:11:51.908 verify=crc32c-intel 00:11:51.908 [job0] 00:11:51.908 filename=/dev/nvme0n1 00:11:51.908 [job1] 00:11:51.908 filename=/dev/nvme0n2 00:11:51.908 [job2] 00:11:51.908 filename=/dev/nvme0n3 00:11:51.908 [job3] 00:11:51.908 filename=/dev/nvme0n4 00:11:51.908 Could not set queue depth (nvme0n1) 00:11:51.908 Could not set queue depth (nvme0n2) 00:11:51.908 Could not set queue depth (nvme0n3) 00:11:51.908 Could not set queue depth (nvme0n4) 00:11:52.167 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.167 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.167 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.167 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.167 fio-3.35 00:11:52.167 Starting 4 threads 00:11:53.543 00:11:53.543 job0: (groupid=0, jobs=1): err= 0: pid=813723: Wed Nov 20 12:25:58 2024 00:11:53.543 read: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec) 00:11:53.543 slat (nsec): min=1311, max=12479k, avg=122995.29, stdev=826558.53 00:11:53.543 clat (usec): min=3735, max=37737, avg=14514.05, stdev=6213.95 00:11:53.543 lat (usec): min=3741, max=38888, avg=14637.05, stdev=6292.22 00:11:53.543 clat percentiles (usec): 00:11:53.543 | 1.00th=[ 6194], 5.00th=[ 9503], 10.00th=[ 9503], 20.00th=[10028], 00:11:53.543 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:11:53.543 | 70.00th=[16712], 80.00th=[18744], 90.00th=[24249], 95.00th=[27657], 00:11:53.543 | 99.00th=[35390], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:11:53.543 | 99.99th=[37487] 00:11:53.543 write: IOPS=3479, BW=13.6MiB/s 
(14.3MB/s)(13.8MiB/1018msec); 0 zone resets 00:11:53.543 slat (nsec): min=1949, max=25942k, avg=171292.02, stdev=989958.52 00:11:53.543 clat (usec): min=2544, max=82491, avg=22967.61, stdev=17554.13 00:11:53.543 lat (usec): min=2554, max=82503, avg=23138.90, stdev=17636.44 00:11:53.543 clat percentiles (usec): 00:11:53.543 | 1.00th=[ 4047], 5.00th=[ 6521], 10.00th=[ 7963], 20.00th=[ 8979], 00:11:53.543 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[21627], 60.00th=[22414], 00:11:53.543 | 70.00th=[23200], 80.00th=[33162], 90.00th=[50070], 95.00th=[64226], 00:11:53.543 | 99.00th=[80217], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:11:53.543 | 99.99th=[82314] 00:11:53.543 bw ( KiB/s): min=10928, max=16384, per=18.97%, avg=13656.00, stdev=3857.97, samples=2 00:11:53.543 iops : min= 2732, max= 4096, avg=3414.00, stdev=964.49, samples=2 00:11:53.543 lat (msec) : 4=0.57%, 10=27.71%, 20=34.03%, 50=32.34%, 100=5.34% 00:11:53.543 cpu : usr=2.75%, sys=4.03%, ctx=366, majf=0, minf=1 00:11:53.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:53.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.543 issued rwts: total=3072,3542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.543 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.543 job1: (groupid=0, jobs=1): err= 0: pid=813726: Wed Nov 20 12:25:58 2024 00:11:53.543 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:11:53.543 slat (nsec): min=1150, max=10343k, avg=73837.81, stdev=468288.62 00:11:53.543 clat (usec): min=4839, max=22161, avg=9596.55, stdev=1749.80 00:11:53.543 lat (usec): min=4858, max=28586, avg=9670.38, stdev=1802.79 00:11:53.543 clat percentiles (usec): 00:11:53.543 | 1.00th=[ 5604], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 7832], 00:11:53.543 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:11:53.543 | 
70.00th=[10028], 80.00th=[10290], 90.00th=[12125], 95.00th=[12649], 00:11:53.543 | 99.00th=[13698], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:11:53.543 | 99.99th=[22152] 00:11:53.543 write: IOPS=4891, BW=19.1MiB/s (20.0MB/s)(19.3MiB/1008msec); 0 zone resets 00:11:53.543 slat (usec): min=2, max=70468, avg=128.34, stdev=1871.98 00:11:53.543 clat (msec): min=2, max=252, avg=13.34, stdev=20.73 00:11:53.543 lat (msec): min=4, max=252, avg=13.47, stdev=21.01 00:11:53.543 clat percentiles (msec): 00:11:53.543 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:11:53.543 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:11:53.543 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 13], 95.00th=[ 52], 00:11:53.543 | 99.00th=[ 125], 99.50th=[ 192], 99.90th=[ 253], 99.95th=[ 253], 00:11:53.543 | 99.99th=[ 253] 00:11:53.543 bw ( KiB/s): min=12288, max=26136, per=26.68%, avg=19212.00, stdev=9792.01, samples=2 00:11:53.543 iops : min= 3072, max= 6534, avg=4803.00, stdev=2448.00, samples=2 00:11:53.543 lat (msec) : 4=0.01%, 10=67.52%, 20=29.38%, 50=0.41%, 100=2.00% 00:11:53.543 lat (msec) : 250=0.60%, 500=0.07% 00:11:53.543 cpu : usr=4.67%, sys=5.36%, ctx=512, majf=0, minf=1 00:11:53.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:53.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.544 issued rwts: total=4608,4931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.544 job2: (groupid=0, jobs=1): err= 0: pid=813740: Wed Nov 20 12:25:58 2024 00:11:53.544 read: IOPS=4304, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1009msec) 00:11:53.544 slat (nsec): min=1036, max=16101k, avg=119262.37, stdev=877016.05 00:11:53.544 clat (usec): min=3308, max=73191, avg=13969.44, stdev=9883.04 00:11:53.544 lat (usec): min=3313, max=73198, avg=14088.70, stdev=9965.77 
00:11:53.544 clat percentiles (usec): 00:11:53.544 | 1.00th=[ 4228], 5.00th=[ 4817], 10.00th=[ 8848], 20.00th=[10159], 00:11:53.544 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:11:53.544 | 70.00th=[13435], 80.00th=[15533], 90.00th=[18220], 95.00th=[27132], 00:11:53.544 | 99.00th=[65274], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:11:53.544 | 99.99th=[72877] 00:11:53.544 write: IOPS=4801, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1009msec); 0 zone resets 00:11:53.544 slat (nsec): min=1728, max=12952k, avg=84031.37, stdev=428384.20 00:11:53.544 clat (usec): min=235, max=73188, avg=13910.76, stdev=9850.70 00:11:53.544 lat (usec): min=585, max=73197, avg=13994.79, stdev=9891.94 00:11:53.544 clat percentiles (usec): 00:11:53.544 | 1.00th=[ 2114], 5.00th=[ 3064], 10.00th=[ 4424], 20.00th=[ 6783], 00:11:53.544 | 30.00th=[ 8979], 40.00th=[10683], 50.00th=[11338], 60.00th=[12125], 00:11:53.544 | 70.00th=[16712], 80.00th=[21103], 90.00th=[22414], 95.00th=[28705], 00:11:53.544 | 99.00th=[56361], 99.50th=[60031], 99.90th=[63701], 99.95th=[66847], 00:11:53.544 | 99.99th=[72877] 00:11:53.544 bw ( KiB/s): min=14016, max=23792, per=26.26%, avg=18904.00, stdev=6912.68, samples=2 00:11:53.544 iops : min= 3504, max= 5948, avg=4726.00, stdev=1728.17, samples=2 00:11:53.544 lat (usec) : 250=0.01%, 750=0.20%, 1000=0.02% 00:11:53.544 lat (msec) : 2=0.12%, 4=4.60%, 10=22.34%, 20=57.19%, 50=13.22% 00:11:53.544 lat (msec) : 100=2.29% 00:11:53.544 cpu : usr=3.17%, sys=5.26%, ctx=515, majf=0, minf=1 00:11:53.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:53.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.544 issued rwts: total=4343,4845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.544 job3: (groupid=0, jobs=1): err= 0: pid=813744: Wed 
Nov 20 12:25:58 2024 00:11:53.544 read: IOPS=4526, BW=17.7MiB/s (18.5MB/s)(18.0MiB/1018msec) 00:11:53.544 slat (nsec): min=936, max=24217k, avg=102726.35, stdev=844816.59 00:11:53.544 clat (usec): min=3664, max=63880, avg=13748.42, stdev=8186.04 00:11:53.544 lat (usec): min=3670, max=63889, avg=13851.15, stdev=8256.65 00:11:53.544 clat percentiles (usec): 00:11:53.544 | 1.00th=[ 4490], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[10028], 00:11:53.544 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:11:53.544 | 70.00th=[13173], 80.00th=[16581], 90.00th=[18744], 95.00th=[28443], 00:11:53.544 | 99.00th=[58459], 99.50th=[61080], 99.90th=[63701], 99.95th=[63701], 00:11:53.544 | 99.99th=[63701] 00:11:53.544 write: IOPS=4916, BW=19.2MiB/s (20.1MB/s)(19.6MiB/1018msec); 0 zone resets 00:11:53.544 slat (nsec): min=1627, max=15856k, avg=87573.80, stdev=560537.80 00:11:53.544 clat (usec): min=408, max=70604, avg=13112.16, stdev=9076.24 00:11:53.544 lat (usec): min=480, max=70612, avg=13199.74, stdev=9137.01 00:11:53.544 clat percentiles (usec): 00:11:53.544 | 1.00th=[ 4047], 5.00th=[ 4948], 10.00th=[ 6325], 20.00th=[ 8094], 00:11:53.544 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11076], 60.00th=[11338], 00:11:53.544 | 70.00th=[11863], 80.00th=[16319], 90.00th=[22676], 95.00th=[26346], 00:11:53.544 | 99.00th=[60031], 99.50th=[65799], 99.90th=[70779], 99.95th=[70779], 00:11:53.544 | 99.99th=[70779] 00:11:53.544 bw ( KiB/s): min=18536, max=20480, per=27.10%, avg=19508.00, stdev=1374.62, samples=2 00:11:53.544 iops : min= 4634, max= 5120, avg=4877.00, stdev=343.65, samples=2 00:11:53.544 lat (usec) : 500=0.02% 00:11:53.544 lat (msec) : 4=0.51%, 10=25.54%, 20=62.55%, 50=9.56%, 100=1.82% 00:11:53.544 cpu : usr=2.75%, sys=4.72%, ctx=475, majf=0, minf=1 00:11:53.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:53.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.544 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.544 issued rwts: total=4608,5005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.544 00:11:53.544 Run status group 0 (all jobs): 00:11:53.544 READ: bw=63.8MiB/s (66.9MB/s), 11.8MiB/s-17.9MiB/s (12.4MB/s-18.7MB/s), io=65.0MiB (68.1MB), run=1008-1018msec 00:11:53.544 WRITE: bw=70.3MiB/s (73.7MB/s), 13.6MiB/s-19.2MiB/s (14.3MB/s-20.1MB/s), io=71.6MiB (75.1MB), run=1008-1018msec 00:11:53.544 00:11:53.544 Disk stats (read/write): 00:11:53.544 nvme0n1: ios=2579/3072, merge=0/0, ticks=36547/58052, in_queue=94599, util=98.70% 00:11:53.544 nvme0n2: ios=3102/3559, merge=0/0, ticks=15525/18076, in_queue=33601, util=97.23% 00:11:53.544 nvme0n3: ios=3584/4007, merge=0/0, ticks=45500/52236, in_queue=97736, util=87.51% 00:11:53.544 nvme0n4: ios=4087/4103, merge=0/0, ticks=53044/45926, in_queue=98970, util=89.16% 00:11:53.544 12:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:53.544 [global] 00:11:53.544 thread=1 00:11:53.544 invalidate=1 00:11:53.544 rw=randwrite 00:11:53.544 time_based=1 00:11:53.544 runtime=1 00:11:53.544 ioengine=libaio 00:11:53.544 direct=1 00:11:53.544 bs=4096 00:11:53.544 iodepth=128 00:11:53.544 norandommap=0 00:11:53.544 numjobs=1 00:11:53.544 00:11:53.544 verify_dump=1 00:11:53.544 verify_backlog=512 00:11:53.544 verify_state_save=0 00:11:53.544 do_verify=1 00:11:53.544 verify=crc32c-intel 00:11:53.544 [job0] 00:11:53.544 filename=/dev/nvme0n1 00:11:53.544 [job1] 00:11:53.544 filename=/dev/nvme0n2 00:11:53.544 [job2] 00:11:53.544 filename=/dev/nvme0n3 00:11:53.544 [job3] 00:11:53.544 filename=/dev/nvme0n4 00:11:53.544 Could not set queue depth (nvme0n1) 00:11:53.544 Could not set queue depth (nvme0n2) 00:11:53.544 Could not set queue depth (nvme0n3) 00:11:53.544 Could not set queue depth 
(nvme0n4) 00:11:53.802 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.802 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.802 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.802 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:53.802 fio-3.35 00:11:53.802 Starting 4 threads 00:11:55.179 00:11:55.179 job0: (groupid=0, jobs=1): err= 0: pid=814177: Wed Nov 20 12:26:00 2024 00:11:55.179 read: IOPS=5775, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1004msec) 00:11:55.179 slat (nsec): min=1236, max=5074.6k, avg=82399.07, stdev=476553.45 00:11:55.179 clat (usec): min=580, max=16551, avg=10208.90, stdev=1690.00 00:11:55.179 lat (usec): min=3732, max=16558, avg=10291.30, stdev=1718.32 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 6063], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 9372], 00:11:55.180 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:11:55.180 | 70.00th=[11076], 80.00th=[11600], 90.00th=[11994], 95.00th=[13042], 00:11:55.180 | 99.00th=[14746], 99.50th=[15401], 99.90th=[16057], 99.95th=[16319], 00:11:55.180 | 99.99th=[16581] 00:11:55.180 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:11:55.180 slat (usec): min=2, max=11145, avg=79.77, stdev=380.29 00:11:55.180 clat (usec): min=4535, max=39493, avg=11031.34, stdev=3980.50 00:11:55.180 lat (usec): min=4547, max=39496, avg=11111.11, stdev=4006.13 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 6456], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[ 9503], 00:11:55.180 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10683], 00:11:55.180 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12780], 95.00th=[13566], 00:11:55.180 | 99.00th=[36963], 99.50th=[37487], 
99.90th=[39584], 99.95th=[39584], 00:11:55.180 | 99.99th=[39584] 00:11:55.180 bw ( KiB/s): min=24526, max=24576, per=30.98%, avg=24551.00, stdev=35.36, samples=2 00:11:55.180 iops : min= 6131, max= 6144, avg=6137.50, stdev= 9.19, samples=2 00:11:55.180 lat (usec) : 750=0.01% 00:11:55.180 lat (msec) : 4=0.19%, 10=47.12%, 20=51.54%, 50=1.14% 00:11:55.180 cpu : usr=5.88%, sys=5.38%, ctx=729, majf=0, minf=1 00:11:55.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:55.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.180 issued rwts: total=5799,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.180 job1: (groupid=0, jobs=1): err= 0: pid=814194: Wed Nov 20 12:26:00 2024 00:11:55.180 read: IOPS=5912, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1006msec) 00:11:55.180 slat (nsec): min=956, max=10421k, avg=84524.41, stdev=481570.09 00:11:55.180 clat (usec): min=3006, max=21892, avg=10207.08, stdev=2113.55 00:11:55.180 lat (usec): min=3014, max=21900, avg=10291.60, stdev=2146.50 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 4686], 5.00th=[ 6390], 10.00th=[ 7832], 20.00th=[ 9372], 00:11:55.180 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10290], 00:11:55.180 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12256], 95.00th=[13435], 00:11:55.180 | 99.00th=[17695], 99.50th=[19792], 99.90th=[21890], 99.95th=[21890], 00:11:55.180 | 99.99th=[21890] 00:11:55.180 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:11:55.180 slat (nsec): min=1719, max=8545.7k, avg=71526.18, stdev=342567.60 00:11:55.180 clat (usec): min=376, max=54857, avg=10818.74, stdev=4073.43 00:11:55.180 lat (usec): min=487, max=54866, avg=10890.27, stdev=4083.59 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 4359], 5.00th=[ 6783], 
10.00th=[ 8586], 20.00th=[ 9634], 00:11:55.180 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10814], 00:11:55.180 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12518], 95.00th=[14222], 00:11:55.180 | 99.00th=[24249], 99.50th=[45351], 99.90th=[54264], 99.95th=[54789], 00:11:55.180 | 99.99th=[54789] 00:11:55.180 bw ( KiB/s): min=24336, max=24526, per=30.83%, avg=24431.00, stdev=134.35, samples=2 00:11:55.180 iops : min= 6084, max= 6131, avg=6107.50, stdev=33.23, samples=2 00:11:55.180 lat (usec) : 500=0.01%, 1000=0.01% 00:11:55.180 lat (msec) : 4=0.65%, 10=40.73%, 20=57.90%, 50=0.46%, 100=0.25% 00:11:55.180 cpu : usr=3.68%, sys=6.17%, ctx=769, majf=0, minf=1 00:11:55.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:55.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.180 issued rwts: total=5948,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.180 job2: (groupid=0, jobs=1): err= 0: pid=814214: Wed Nov 20 12:26:00 2024 00:11:55.180 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:11:55.180 slat (nsec): min=1278, max=15408k, avg=112363.35, stdev=837089.61 00:11:55.180 clat (usec): min=3295, max=37223, avg=14410.88, stdev=3888.80 00:11:55.180 lat (usec): min=3300, max=37227, avg=14523.25, stdev=3968.47 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 5735], 5.00th=[11207], 10.00th=[11863], 20.00th=[11994], 00:11:55.180 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13042], 60.00th=[13304], 00:11:55.180 | 70.00th=[14484], 80.00th=[17433], 90.00th=[20055], 95.00th=[21103], 00:11:55.180 | 99.00th=[27395], 99.50th=[30278], 99.90th=[36439], 99.95th=[36439], 00:11:55.180 | 99.99th=[36963] 00:11:55.180 write: IOPS=4536, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1010msec); 0 zone resets 00:11:55.180 slat (nsec): min=1864, 
max=13863k, avg=99396.91, stdev=610382.52 00:11:55.180 clat (usec): min=2559, max=48559, avg=15003.57, stdev=7929.51 00:11:55.180 lat (usec): min=2568, max=48567, avg=15102.96, stdev=7994.16 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 4178], 5.00th=[ 7635], 10.00th=[ 9503], 20.00th=[10290], 00:11:55.180 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:11:55.180 | 70.00th=[14484], 80.00th=[19006], 90.00th=[22676], 95.00th=[33162], 00:11:55.180 | 99.00th=[46400], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:11:55.180 | 99.99th=[48497] 00:11:55.180 bw ( KiB/s): min=16264, max=19376, per=22.49%, avg=17820.00, stdev=2200.52, samples=2 00:11:55.180 iops : min= 4066, max= 4844, avg=4455.00, stdev=550.13, samples=2 00:11:55.180 lat (msec) : 4=0.62%, 10=9.61%, 20=74.01%, 50=15.75% 00:11:55.180 cpu : usr=3.37%, sys=6.44%, ctx=402, majf=0, minf=1 00:11:55.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:55.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.180 issued rwts: total=4096,4582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.180 job3: (groupid=0, jobs=1): err= 0: pid=814220: Wed Nov 20 12:26:00 2024 00:11:55.180 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:11:55.180 slat (nsec): min=999, max=14933k, avg=124486.15, stdev=872825.37 00:11:55.180 clat (usec): min=5185, max=81227, avg=15535.00, stdev=8027.85 00:11:55.180 lat (usec): min=5199, max=88008, avg=15659.48, stdev=8096.53 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 5276], 5.00th=[ 7373], 10.00th=[ 9896], 20.00th=[11994], 00:11:55.180 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13435], 60.00th=[14353], 00:11:55.180 | 70.00th=[15401], 80.00th=[16450], 90.00th=[21365], 95.00th=[32113], 00:11:55.180 | 
99.00th=[51643], 99.50th=[54789], 99.90th=[65799], 99.95th=[81265], 00:11:55.180 | 99.99th=[81265] 00:11:55.180 write: IOPS=3109, BW=12.1MiB/s (12.7MB/s)(12.3MiB/1009msec); 0 zone resets 00:11:55.180 slat (nsec): min=1748, max=21568k, avg=186573.31, stdev=1110362.16 00:11:55.180 clat (usec): min=1138, max=116969, avg=25591.91, stdev=23373.70 00:11:55.180 lat (usec): min=1148, max=116977, avg=25778.48, stdev=23522.88 00:11:55.180 clat percentiles (usec): 00:11:55.180 | 1.00th=[ 1958], 5.00th=[ 9372], 10.00th=[ 11600], 20.00th=[ 12387], 00:11:55.180 | 30.00th=[ 12911], 40.00th=[ 13566], 50.00th=[ 16581], 60.00th=[ 21365], 00:11:55.180 | 70.00th=[ 22676], 80.00th=[ 31065], 90.00th=[ 58983], 95.00th=[ 90702], 00:11:55.180 | 99.00th=[108528], 99.50th=[109577], 99.90th=[116917], 99.95th=[116917], 00:11:55.180 | 99.99th=[116917] 00:11:55.180 bw ( KiB/s): min=12288, max=12288, per=15.51%, avg=12288.00, stdev= 0.00, samples=2 00:11:55.180 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:55.180 lat (msec) : 2=0.56%, 4=0.39%, 10=7.59%, 20=63.12%, 50=21.89% 00:11:55.180 lat (msec) : 100=4.86%, 250=1.59% 00:11:55.180 cpu : usr=2.18%, sys=3.27%, ctx=332, majf=0, minf=2 00:11:55.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:55.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.180 issued rwts: total=3072,3137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.180 00:11:55.180 Run status group 0 (all jobs): 00:11:55.180 READ: bw=73.2MiB/s (76.7MB/s), 11.9MiB/s-23.1MiB/s (12.5MB/s-24.2MB/s), io=73.9MiB (77.5MB), run=1004-1010msec 00:11:55.180 WRITE: bw=77.4MiB/s (81.1MB/s), 12.1MiB/s-23.9MiB/s (12.7MB/s-25.1MB/s), io=78.2MiB (81.9MB), run=1004-1010msec 00:11:55.180 00:11:55.180 Disk stats (read/write): 00:11:55.180 nvme0n1: ios=5112/5120, 
merge=0/0, ticks=26904/26063, in_queue=52967, util=97.70% 00:11:55.180 nvme0n2: ios=5142/5295, merge=0/0, ticks=27217/28739, in_queue=55956, util=99.19% 00:11:55.180 nvme0n3: ios=3490/3584, merge=0/0, ticks=48666/54963, in_queue=103629, util=99.27% 00:11:55.180 nvme0n4: ios=2150/2521, merge=0/0, ticks=22214/41058, in_queue=63272, util=90.31% 00:11:55.180 12:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:55.180 12:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=814415 00:11:55.180 12:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:55.180 12:26:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:55.180 [global] 00:11:55.180 thread=1 00:11:55.180 invalidate=1 00:11:55.180 rw=read 00:11:55.180 time_based=1 00:11:55.180 runtime=10 00:11:55.180 ioengine=libaio 00:11:55.180 direct=1 00:11:55.180 bs=4096 00:11:55.180 iodepth=1 00:11:55.180 norandommap=1 00:11:55.180 numjobs=1 00:11:55.180 00:11:55.180 [job0] 00:11:55.180 filename=/dev/nvme0n1 00:11:55.180 [job1] 00:11:55.180 filename=/dev/nvme0n2 00:11:55.180 [job2] 00:11:55.180 filename=/dev/nvme0n3 00:11:55.180 [job3] 00:11:55.180 filename=/dev/nvme0n4 00:11:55.180 Could not set queue depth (nvme0n1) 00:11:55.180 Could not set queue depth (nvme0n2) 00:11:55.180 Could not set queue depth (nvme0n3) 00:11:55.180 Could not set queue depth (nvme0n4) 00:11:55.180 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.180 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.180 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.180 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:55.180 fio-3.35 00:11:55.181 Starting 4 threads 00:11:58.467 12:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:58.467 12:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:58.467 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:11:58.467 fio: pid=814726, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:58.467 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=35045376, buflen=4096 00:11:58.467 fio: pid=814715, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:58.467 12:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:58.467 12:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:58.467 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51916800, buflen=4096 00:11:58.467 fio: pid=814664, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:58.467 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:58.467 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:58.727 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=55304192, buflen=4096 00:11:58.727 fio: pid=814688, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:58.727 12:26:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:58.727 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:58.727 00:11:58.727 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=814664: Wed Nov 20 12:26:04 2024 00:11:58.727 read: IOPS=4160, BW=16.2MiB/s (17.0MB/s)(49.5MiB/3047msec) 00:11:58.727 slat (usec): min=6, max=30602, avg=13.30, stdev=356.38 00:11:58.727 clat (usec): min=149, max=41073, avg=224.29, stdev=1024.61 00:11:58.727 lat (usec): min=156, max=41085, avg=237.59, stdev=1085.84 00:11:58.727 clat percentiles (usec): 00:11:58.727 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:11:58.727 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:11:58.727 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 231], 00:11:58.727 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 603], 99.95th=[41157], 00:11:58.727 | 99.99th=[41157] 00:11:58.727 bw ( KiB/s): min= 5664, max=20656, per=38.95%, avg=16700.80, stdev=6218.09, samples=5 00:11:58.727 iops : min= 1416, max= 5164, avg=4175.20, stdev=1554.52, samples=5 00:11:58.727 lat (usec) : 250=98.82%, 500=1.07%, 750=0.04% 00:11:58.727 lat (msec) : 2=0.01%, 50=0.06% 00:11:58.727 cpu : usr=1.15%, sys=3.71%, ctx=12684, majf=0, minf=1 00:11:58.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.727 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.727 issued rwts: total=12676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.727 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=814688: Wed Nov 20 12:26:04 2024 00:11:58.727 read: IOPS=4158, BW=16.2MiB/s (17.0MB/s)(52.7MiB/3247msec) 00:11:58.727 slat (usec): min=6, max=17323, avg=14.97, stdev=321.69 00:11:58.727 clat (usec): min=155, max=9675, avg=221.67, stdev=115.24 00:11:58.727 lat (usec): min=163, max=22876, avg=236.64, stdev=366.86 00:11:58.727 clat percentiles (usec): 00:11:58.727 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:11:58.727 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 225], 00:11:58.727 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:11:58.727 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 474], 99.95th=[ 857], 00:11:58.727 | 99.99th=[ 7242] 00:11:58.727 bw ( KiB/s): min=13985, max=18816, per=39.53%, avg=16946.83, stdev=2028.91, samples=6 00:11:58.727 iops : min= 3496, max= 4704, avg=4236.67, stdev=507.30, samples=6 00:11:58.727 lat (usec) : 250=84.86%, 500=15.06%, 750=0.02%, 1000=0.01% 00:11:58.727 lat (msec) : 2=0.02%, 10=0.02% 00:11:58.728 cpu : usr=2.50%, sys=6.44%, ctx=13510, majf=0, minf=2 00:11:58.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.728 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.728 issued rwts: total=13503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.728 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=814715: Wed Nov 20 12:26:04 2024 00:11:58.728 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(33.4MiB/2833msec) 00:11:58.728 slat (nsec): min=7314, max=42079, avg=8655.32, stdev=1467.44 00:11:58.728 clat (usec): min=174, max=41051, avg=318.06, stdev=1760.46 00:11:58.728 lat (usec): min=182, max=41076, avg=326.71, stdev=1761.12 00:11:58.728 clat percentiles (usec): 00:11:58.728 | 1.00th=[ 
188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:11:58.728 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:11:58.728 | 70.00th=[ 237], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 424], 00:11:58.728 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[41157], 99.95th=[41157], 00:11:58.728 | 99.99th=[41157] 00:11:58.728 bw ( KiB/s): min= 96, max=17256, per=28.18%, avg=12080.00, stdev=7042.22, samples=5 00:11:58.728 iops : min= 24, max= 4314, avg=3020.00, stdev=1760.55, samples=5 00:11:58.728 lat (usec) : 250=77.49%, 500=22.25%, 750=0.05% 00:11:58.728 lat (msec) : 2=0.01%, 50=0.19% 00:11:58.728 cpu : usr=1.84%, sys=4.77%, ctx=8560, majf=0, minf=2 00:11:58.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.728 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.728 issued rwts: total=8557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.728 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=814726: Wed Nov 20 12:26:04 2024 00:11:58.728 read: IOPS=25, BW=101KiB/s (104kB/s)(268KiB/2646msec) 00:11:58.728 slat (nsec): min=8322, max=35641, avg=12946.65, stdev=3965.20 00:11:58.728 clat (usec): min=262, max=41553, avg=39170.04, stdev=8446.25 00:11:58.728 lat (usec): min=284, max=41561, avg=39182.93, stdev=8443.37 00:11:58.728 clat percentiles (usec): 00:11:58.728 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:58.728 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:58.728 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:58.728 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:58.728 | 99.99th=[41681] 00:11:58.728 bw ( KiB/s): min= 96, max= 104, per=0.23%, avg=100.80, stdev= 4.38, 
samples=5 00:11:58.728 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:11:58.728 lat (usec) : 500=2.94%, 750=1.47% 00:11:58.728 lat (msec) : 50=94.12% 00:11:58.728 cpu : usr=0.08%, sys=0.00%, ctx=68, majf=0, minf=2 00:11:58.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:58.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.728 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.728 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:58.728 00:11:58.728 Run status group 0 (all jobs): 00:11:58.728 READ: bw=41.9MiB/s (43.9MB/s), 101KiB/s-16.2MiB/s (104kB/s-17.0MB/s), io=136MiB (143MB), run=2646-3247msec 00:11:58.728 00:11:58.728 Disk stats (read/write): 00:11:58.728 nvme0n1: ios=11922/0, merge=0/0, ticks=2636/0, in_queue=2636, util=93.62% 00:11:58.728 nvme0n2: ios=12885/0, merge=0/0, ticks=2714/0, in_queue=2714, util=93.13% 00:11:58.728 nvme0n3: ios=8557/0, merge=0/0, ticks=3327/0, in_queue=3327, util=99.45% 00:11:58.728 nvme0n4: ios=65/0, merge=0/0, ticks=2544/0, in_queue=2544, util=96.38% 00:11:58.988 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:58.988 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:58.988 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:58.988 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:59.246 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.246 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:59.506 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:59.506 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 814415 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:59.765 12:26:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:59.765 nvmf hotplug test: fio failed as expected 00:11:59.765 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.025 rmmod nvme_tcp 00:12:00.025 rmmod nvme_fabrics 00:12:00.025 rmmod nvme_keyring 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 811342 ']' 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 811342 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 811342 ']' 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 811342 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.025 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 811342 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 811342' 00:12:00.284 killing process with pid 811342 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 811342 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 811342 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.284 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.285 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.285 12:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.821 00:12:02.821 real 0m27.610s 00:12:02.821 user 2m2.961s 00:12:02.821 sys 0m9.072s 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.821 ************************************ 00:12:02.821 END TEST nvmf_fio_target 00:12:02.821 ************************************ 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.821 12:26:08 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:02.821 ************************************ 00:12:02.821 START TEST nvmf_bdevio 00:12:02.821 ************************************ 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:02.821 * Looking for test storage... 00:12:02.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.821 --rc genhtml_branch_coverage=1 00:12:02.821 --rc genhtml_function_coverage=1 00:12:02.821 --rc genhtml_legend=1 00:12:02.821 --rc geninfo_all_blocks=1 00:12:02.821 --rc geninfo_unexecuted_blocks=1 00:12:02.821 00:12:02.821 ' 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.821 --rc genhtml_branch_coverage=1 00:12:02.821 --rc genhtml_function_coverage=1 00:12:02.821 --rc genhtml_legend=1 00:12:02.821 --rc geninfo_all_blocks=1 00:12:02.821 --rc geninfo_unexecuted_blocks=1 00:12:02.821 00:12:02.821 ' 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.821 --rc genhtml_branch_coverage=1 00:12:02.821 --rc genhtml_function_coverage=1 00:12:02.821 --rc genhtml_legend=1 00:12:02.821 --rc geninfo_all_blocks=1 00:12:02.821 --rc geninfo_unexecuted_blocks=1 00:12:02.821 00:12:02.821 ' 00:12:02.821 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.821 --rc genhtml_branch_coverage=1 00:12:02.821 --rc genhtml_function_coverage=1 00:12:02.821 --rc genhtml_legend=1 00:12:02.822 --rc geninfo_all_blocks=1 00:12:02.822 --rc geninfo_unexecuted_blocks=1 00:12:02.822 00:12:02.822 ' 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.822 12:26:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.822 12:26:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.822 12:26:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.822 
12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.822 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.391 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.392 12:26:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:12:09.392 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:12:09.392 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.392 
12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:12:09.392 Found net devices under 0000:1a:00.0: cvl_0_0 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:12:09.392 Found net devices under 0000:1a:00.1: cvl_0_1 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:09.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:12:09.392 00:12:09.392 --- 10.0.0.2 ping statistics --- 00:12:09.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.392 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:12:09.392 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:09.392 00:12:09.392 --- 10.0.0.1 ping statistics --- 00:12:09.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.392 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.393 12:26:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=819395 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 819395 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 819395 ']' 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.393 12:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.393 [2024-11-20 12:26:14.555753] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:12:09.393 [2024-11-20 12:26:14.555796] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.393 [2024-11-20 12:26:14.634671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.393 [2024-11-20 12:26:14.672654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.393 [2024-11-20 12:26:14.672691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.393 [2024-11-20 12:26:14.672697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.393 [2024-11-20 12:26:14.672702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.393 [2024-11-20 12:26:14.672706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:09.393 [2024-11-20 12:26:14.674294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:09.393 [2024-11-20 12:26:14.674425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:09.393 [2024-11-20 12:26:14.674523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.393 [2024-11-20 12:26:14.674523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.652 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.652 [2024-11-20 12:26:15.412947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 12:26:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 Malloc0 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 [2024-11-20 12:26:15.473986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:09.911 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:09.911 { 00:12:09.911 "params": { 00:12:09.911 "name": "Nvme$subsystem", 00:12:09.911 "trtype": "$TEST_TRANSPORT", 00:12:09.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.911 "adrfam": "ipv4", 00:12:09.911 "trsvcid": "$NVMF_PORT", 00:12:09.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.911 "hdgst": ${hdgst:-false}, 00:12:09.911 "ddgst": ${ddgst:-false} 00:12:09.911 }, 00:12:09.911 "method": "bdev_nvme_attach_controller" 00:12:09.911 } 00:12:09.911 EOF 00:12:09.912 )") 00:12:09.912 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:09.912 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:09.912 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:09.912 12:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:09.912 "params": { 00:12:09.912 "name": "Nvme1", 00:12:09.912 "trtype": "tcp", 00:12:09.912 "traddr": "10.0.0.2", 00:12:09.912 "adrfam": "ipv4", 00:12:09.912 "trsvcid": "4420", 00:12:09.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.912 "hdgst": false, 00:12:09.912 "ddgst": false 00:12:09.912 }, 00:12:09.912 "method": "bdev_nvme_attach_controller" 00:12:09.912 }' 00:12:09.912 [2024-11-20 12:26:15.523186] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:12:09.912 [2024-11-20 12:26:15.523225] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819442 ] 00:12:09.912 [2024-11-20 12:26:15.598470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.912 [2024-11-20 12:26:15.638891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.912 [2024-11-20 12:26:15.639003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.912 [2024-11-20 12:26:15.639004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.170 I/O targets: 00:12:10.170 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:10.170 00:12:10.170 00:12:10.170 CUnit - A unit testing framework for C - Version 2.1-3 00:12:10.170 http://cunit.sourceforge.net/ 00:12:10.170 00:12:10.170 00:12:10.170 Suite: bdevio tests on: Nvme1n1 00:12:10.170 Test: blockdev write read block ...passed 00:12:10.170 Test: blockdev write zeroes read block ...passed 00:12:10.170 Test: blockdev write zeroes read no split ...passed 00:12:10.429 Test: blockdev write zeroes read split 
...passed 00:12:10.429 Test: blockdev write zeroes read split partial ...passed 00:12:10.429 Test: blockdev reset ...[2024-11-20 12:26:15.989981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:10.429 [2024-11-20 12:26:15.990043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662780 (9): Bad file descriptor 00:12:10.429 [2024-11-20 12:26:16.041741] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:10.429 passed 00:12:10.429 Test: blockdev write read 8 blocks ...passed 00:12:10.429 Test: blockdev write read size > 128k ...passed 00:12:10.429 Test: blockdev write read invalid size ...passed 00:12:10.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.429 Test: blockdev write read max offset ...passed 00:12:10.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.688 Test: blockdev writev readv 8 blocks ...passed 00:12:10.688 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.688 Test: blockdev writev readv block ...passed 00:12:10.688 Test: blockdev writev readv size > 128k ...passed 00:12:10.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.688 Test: blockdev comparev and writev ...[2024-11-20 12:26:16.290823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 12:26:16.290851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.290865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 
12:26:16.290872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.291078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 12:26:16.291087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.291097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 12:26:16.291103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.291303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 12:26:16.291312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.291323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 12:26:16.291330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.291547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 12:26:16.291557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.291568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:10.688 [2024-11-20 12:26:16.291579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:10.688 passed 00:12:10.688 Test: blockdev nvme passthru rw ...passed 00:12:10.688 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:26:16.373672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.688 [2024-11-20 12:26:16.373687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.373780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.688 [2024-11-20 12:26:16.373789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.373878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.688 [2024-11-20 12:26:16.373886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:10.688 [2024-11-20 12:26:16.373982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:10.688 [2024-11-20 12:26:16.373991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:10.688 passed 00:12:10.688 Test: blockdev nvme admin passthru ...passed 00:12:10.688 Test: blockdev copy ...passed 00:12:10.688 00:12:10.688 Run Summary: Type Total Ran Passed Failed Inactive 00:12:10.688 suites 1 1 n/a 0 0 00:12:10.688 tests 23 23 23 0 0 00:12:10.688 asserts 152 152 152 0 n/a 00:12:10.688 00:12:10.688 Elapsed time = 1.204 seconds 
00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.947 rmmod nvme_tcp 00:12:10.947 rmmod nvme_fabrics 00:12:10.947 rmmod nvme_keyring 00:12:10.947 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 819395 ']' 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 819395 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 819395 ']' 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 819395 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 819395 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 819395' 00:12:10.948 killing process with pid 819395 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 819395 00:12:10.948 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 819395 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.207 12:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.766 12:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.766 00:12:13.766 real 0m10.844s 00:12:13.766 user 0m12.723s 00:12:13.766 sys 0m5.228s 00:12:13.766 12:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.766 12:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:13.766 ************************************ 00:12:13.766 END TEST nvmf_bdevio 00:12:13.766 ************************************ 00:12:13.766 12:26:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:13.766 00:12:13.766 real 4m45.413s 00:12:13.766 user 10m55.882s 00:12:13.766 sys 1m37.656s 00:12:13.766 12:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.766 12:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:13.766 ************************************ 00:12:13.766 END TEST nvmf_target_core 00:12:13.766 ************************************ 00:12:13.766 12:26:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:13.766 12:26:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:13.766 12:26:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.766 12:26:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:12:13.766 ************************************ 00:12:13.766 START TEST nvmf_target_extra 00:12:13.766 ************************************ 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:13.766 * Looking for test storage... 00:12:13.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.766 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:13.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.767 --rc genhtml_branch_coverage=1 00:12:13.767 --rc genhtml_function_coverage=1 00:12:13.767 --rc genhtml_legend=1 00:12:13.767 --rc geninfo_all_blocks=1 
00:12:13.767 --rc geninfo_unexecuted_blocks=1 00:12:13.767 00:12:13.767 ' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:13.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.767 --rc genhtml_branch_coverage=1 00:12:13.767 --rc genhtml_function_coverage=1 00:12:13.767 --rc genhtml_legend=1 00:12:13.767 --rc geninfo_all_blocks=1 00:12:13.767 --rc geninfo_unexecuted_blocks=1 00:12:13.767 00:12:13.767 ' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:13.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.767 --rc genhtml_branch_coverage=1 00:12:13.767 --rc genhtml_function_coverage=1 00:12:13.767 --rc genhtml_legend=1 00:12:13.767 --rc geninfo_all_blocks=1 00:12:13.767 --rc geninfo_unexecuted_blocks=1 00:12:13.767 00:12:13.767 ' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:13.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.767 --rc genhtml_branch_coverage=1 00:12:13.767 --rc genhtml_function_coverage=1 00:12:13.767 --rc genhtml_legend=1 00:12:13.767 --rc geninfo_all_blocks=1 00:12:13.767 --rc geninfo_unexecuted_blocks=1 00:12:13.767 00:12:13.767 ' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:13.767 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.768 ************************************ 00:12:13.768 START TEST nvmf_example 00:12:13.768 ************************************ 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:13.768 * Looking for test storage... 00:12:13.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.768 
12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:13.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.768 --rc genhtml_branch_coverage=1 00:12:13.768 --rc genhtml_function_coverage=1 00:12:13.768 --rc genhtml_legend=1 00:12:13.768 --rc geninfo_all_blocks=1 00:12:13.768 --rc geninfo_unexecuted_blocks=1 00:12:13.768 00:12:13.768 ' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:13.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.768 --rc genhtml_branch_coverage=1 00:12:13.768 --rc genhtml_function_coverage=1 00:12:13.768 --rc genhtml_legend=1 00:12:13.768 --rc geninfo_all_blocks=1 00:12:13.768 --rc geninfo_unexecuted_blocks=1 00:12:13.768 00:12:13.768 ' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:13.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.768 --rc genhtml_branch_coverage=1 00:12:13.768 --rc genhtml_function_coverage=1 00:12:13.768 --rc genhtml_legend=1 00:12:13.768 --rc geninfo_all_blocks=1 00:12:13.768 --rc geninfo_unexecuted_blocks=1 00:12:13.768 00:12:13.768 ' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:13.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.768 --rc 
genhtml_branch_coverage=1 00:12:13.768 --rc genhtml_function_coverage=1 00:12:13.768 --rc genhtml_legend=1 00:12:13.768 --rc geninfo_all_blocks=1 00:12:13.768 --rc geninfo_unexecuted_blocks=1 00:12:13.768 00:12:13.768 ' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.768 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:13.769 12:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.769 
12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.769 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.423 12:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:12:20.423 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:12:20.423 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.423 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:12:20.424 Found net devices under 0000:1a:00.0: cvl_0_0 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.424 12:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:12:20.424 Found net devices under 0000:1a:00.1: cvl_0_1 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.424 
12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:20.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:20.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms
00:12:20.424
00:12:20.424 --- 10.0.0.2 ping statistics ---
00:12:20.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:20.424 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:20.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:20.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms
00:12:20.424
00:12:20.424 --- 10.0.0.1 ping statistics ---
00:12:20.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:20.424 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:20.424 12:26:25
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=823541 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 823541 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 823541 ']' 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:20.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.424 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:20.991 12:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.991 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:20.992 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:33.199 Initializing NVMe Controllers
00:12:33.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:33.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:33.199 Initialization complete. Launching workers.
00:12:33.199 ========================================================
00:12:33.199 Latency(us)
00:12:33.199 Device Information : IOPS MiB/s Average min max
00:12:33.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19629.70 76.68 3260.19 441.31 16172.88
00:12:33.199 ========================================================
00:12:33.199 Total : 19629.70 76.68 3260.19 441.31 16172.88
00:12:33.199
00:12:33.199 [2024-11-20 12:26:36.857354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ecd30 is same with the state(6) to be set
00:12:33.199 [2024-11-20 12:26:36.857432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ecd30 is same with the state(6) to be set
00:12:33.199 [2024-11-20 12:26:36.857440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ecd30 is same with the state(6) to be set
00:12:33.199 [2024-11-20 12:26:36.857446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ecd30 is same with the state(6) to be set
12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:12:33.199 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:12:33.199 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:33.199 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:12:33.199 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 --
# '[' tcp == tcp ']' 00:12:33.199 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:33.199 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.199 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.199 rmmod nvme_tcp 00:12:33.199 rmmod nvme_fabrics 00:12:33.199 rmmod nvme_keyring 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 823541 ']' 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 823541 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 823541 ']' 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 823541 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823541 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823541' 00:12:33.200 killing process with pid 823541 00:12:33.200 12:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 823541
00:12:33.200 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 823541
00:12:33.200 nvmf threads initialize successfully
00:12:33.200 bdev subsystem init successfully
00:12:33.200 created a nvmf target service
00:12:33.200 create targets's poll groups done
00:12:33.200 all subsystems of target started
00:12:33.200 nvmf target is running
00:12:33.200 all subsystems of target stopped
00:12:33.200 destroy targets's poll groups done
00:12:33.200 destroyed the nvmf target service
00:12:33.200 bdev subsystem finish successfully
00:12:33.200 nvmf threads destroy successfully
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:33.200 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:33.200 12:26:37
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.768 00:12:33.768 real 0m19.995s 00:12:33.768 user 0m46.128s 00:12:33.768 sys 0m5.939s 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.768 ************************************ 00:12:33.768 END TEST nvmf_example 00:12:33.768 ************************************ 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.768 ************************************ 00:12:33.768 START TEST nvmf_filesystem 00:12:33.768 ************************************ 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:33.768 * Looking for test storage... 
00:12:33.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.768 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:33.769 
12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.769 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.032 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:34.032 --rc genhtml_branch_coverage=1 00:12:34.032 --rc genhtml_function_coverage=1 00:12:34.032 --rc genhtml_legend=1 00:12:34.032 --rc geninfo_all_blocks=1 00:12:34.032 --rc geninfo_unexecuted_blocks=1 00:12:34.032 00:12:34.032 ' 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.032 --rc genhtml_branch_coverage=1 00:12:34.032 --rc genhtml_function_coverage=1 00:12:34.032 --rc genhtml_legend=1 00:12:34.032 --rc geninfo_all_blocks=1 00:12:34.032 --rc geninfo_unexecuted_blocks=1 00:12:34.032 00:12:34.032 ' 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.032 --rc genhtml_branch_coverage=1 00:12:34.032 --rc genhtml_function_coverage=1 00:12:34.032 --rc genhtml_legend=1 00:12:34.032 --rc geninfo_all_blocks=1 00:12:34.032 --rc geninfo_unexecuted_blocks=1 00:12:34.032 00:12:34.032 ' 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.032 --rc genhtml_branch_coverage=1 00:12:34.032 --rc genhtml_function_coverage=1 00:12:34.032 --rc genhtml_legend=1 00:12:34.032 --rc geninfo_all_blocks=1 00:12:34.032 --rc geninfo_unexecuted_blocks=1 00:12:34.032 00:12:34.032 ' 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:34.032 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:34.032 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:34.033 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:34.033 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:34.033 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:34.033 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:34.033 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:34.034 
12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:34.034 #define SPDK_CONFIG_H 00:12:34.034 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:34.034 #define SPDK_CONFIG_APPS 1 00:12:34.034 #define SPDK_CONFIG_ARCH native 00:12:34.034 #undef SPDK_CONFIG_ASAN 00:12:34.034 #undef SPDK_CONFIG_AVAHI 00:12:34.034 #undef SPDK_CONFIG_CET 00:12:34.034 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:34.034 #define SPDK_CONFIG_COVERAGE 1 00:12:34.034 #define SPDK_CONFIG_CROSS_PREFIX 00:12:34.034 #undef SPDK_CONFIG_CRYPTO 00:12:34.034 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:34.034 #undef SPDK_CONFIG_CUSTOMOCF 00:12:34.034 #undef SPDK_CONFIG_DAOS 00:12:34.034 #define SPDK_CONFIG_DAOS_DIR 00:12:34.034 #define SPDK_CONFIG_DEBUG 1 00:12:34.034 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:34.034 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:34.034 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:34.034 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:34.034 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:34.034 #undef SPDK_CONFIG_DPDK_UADK 00:12:34.034 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:34.034 #define SPDK_CONFIG_EXAMPLES 1 00:12:34.034 #undef SPDK_CONFIG_FC 00:12:34.034 #define SPDK_CONFIG_FC_PATH 00:12:34.034 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:34.034 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:34.034 #define SPDK_CONFIG_FSDEV 1 00:12:34.034 #undef SPDK_CONFIG_FUSE 00:12:34.034 #undef SPDK_CONFIG_FUZZER 00:12:34.034 #define SPDK_CONFIG_FUZZER_LIB 00:12:34.034 #undef SPDK_CONFIG_GOLANG 00:12:34.034 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:34.034 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:34.034 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:34.034 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:34.034 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:34.034 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:34.034 #undef SPDK_CONFIG_HAVE_LZ4 00:12:34.034 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:34.034 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:34.034 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:34.034 #define SPDK_CONFIG_IDXD 1 00:12:34.034 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:34.034 #undef SPDK_CONFIG_IPSEC_MB 00:12:34.034 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:34.034 #define SPDK_CONFIG_ISAL 1 00:12:34.034 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:34.034 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:34.034 #define SPDK_CONFIG_LIBDIR 00:12:34.034 #undef SPDK_CONFIG_LTO 00:12:34.034 #define SPDK_CONFIG_MAX_LCORES 128 00:12:34.034 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:34.034 #define SPDK_CONFIG_NVME_CUSE 1 00:12:34.034 #undef SPDK_CONFIG_OCF 00:12:34.034 #define SPDK_CONFIG_OCF_PATH 00:12:34.034 #define SPDK_CONFIG_OPENSSL_PATH 00:12:34.034 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:34.034 #define SPDK_CONFIG_PGO_DIR 00:12:34.034 #undef SPDK_CONFIG_PGO_USE 00:12:34.034 #define SPDK_CONFIG_PREFIX /usr/local 00:12:34.034 #undef SPDK_CONFIG_RAID5F 00:12:34.034 #undef SPDK_CONFIG_RBD 00:12:34.034 #define SPDK_CONFIG_RDMA 1 00:12:34.034 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:34.034 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:34.034 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:34.034 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:34.034 #define SPDK_CONFIG_SHARED 1 00:12:34.034 #undef SPDK_CONFIG_SMA 00:12:34.034 #define SPDK_CONFIG_TESTS 1 00:12:34.034 #undef SPDK_CONFIG_TSAN 00:12:34.034 #define SPDK_CONFIG_UBLK 1 00:12:34.034 #define SPDK_CONFIG_UBSAN 1 00:12:34.034 #undef SPDK_CONFIG_UNIT_TESTS 00:12:34.034 #undef SPDK_CONFIG_URING 00:12:34.034 #define SPDK_CONFIG_URING_PATH 00:12:34.034 #undef SPDK_CONFIG_URING_ZNS 00:12:34.034 #undef SPDK_CONFIG_USDT 00:12:34.034 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:34.034 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:34.034 #define SPDK_CONFIG_VFIO_USER 1 00:12:34.034 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:34.034 #define SPDK_CONFIG_VHOST 1 00:12:34.034 #define SPDK_CONFIG_VIRTIO 1 00:12:34.034 #undef SPDK_CONFIG_VTUNE 00:12:34.034 #define SPDK_CONFIG_VTUNE_DIR 00:12:34.034 #define SPDK_CONFIG_WERROR 1 00:12:34.034 #define SPDK_CONFIG_WPDK_DIR 00:12:34.034 #undef SPDK_CONFIG_XNVME 00:12:34.034 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.034 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:34.035 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:34.035 
12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:34.035 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:34.035 
12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:34.035 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:34.036 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:34.036 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 826108 ]] 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 826108 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XJgxej 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XJgxej/tests/target /tmp/spdk.XJgxej 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:34.037 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=83947151360 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=94489763840 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10542612480 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47233515520 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47244881920 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=18874818560 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=18897952768 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23134208 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=46170648576 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47244881920 00:12:34.038 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074233344 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9448964096 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9448976384 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:34.038 * Looking for test storage... 
00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=83947151360 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=12757204992 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.038 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:34.038 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:34.038 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.039 --rc genhtml_branch_coverage=1 00:12:34.039 --rc genhtml_function_coverage=1 00:12:34.039 --rc genhtml_legend=1 00:12:34.039 --rc geninfo_all_blocks=1 00:12:34.039 --rc geninfo_unexecuted_blocks=1 00:12:34.039 00:12:34.039 ' 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.039 --rc genhtml_branch_coverage=1 00:12:34.039 --rc genhtml_function_coverage=1 00:12:34.039 --rc genhtml_legend=1 00:12:34.039 --rc geninfo_all_blocks=1 00:12:34.039 --rc geninfo_unexecuted_blocks=1 00:12:34.039 00:12:34.039 ' 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.039 --rc genhtml_branch_coverage=1 00:12:34.039 --rc genhtml_function_coverage=1 00:12:34.039 --rc genhtml_legend=1 00:12:34.039 --rc geninfo_all_blocks=1 00:12:34.039 --rc geninfo_unexecuted_blocks=1 00:12:34.039 00:12:34.039 ' 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.039 --rc genhtml_branch_coverage=1 00:12:34.039 --rc genhtml_function_coverage=1 00:12:34.039 --rc genhtml_legend=1 00:12:34.039 --rc geninfo_all_blocks=1 00:12:34.039 --rc geninfo_unexecuted_blocks=1 00:12:34.039 00:12:34.039 ' 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.039 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.039 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.300 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.877 12:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:12:40.877 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:12:40.877 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.877 12:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.877 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:12:40.878 Found net devices under 0000:1a:00.0: cvl_0_0 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:12:40.878 Found net devices under 0000:1a:00.1: cvl_0_1 00:12:40.878 12:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:12:40.878 00:12:40.878 --- 10.0.0.2 ping statistics --- 00:12:40.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.878 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:12:40.878 00:12:40.878 --- 10.0.0.1 ping statistics --- 00:12:40.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.878 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.878 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:40.878 12:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:40.878 ************************************ 00:12:40.878 START TEST nvmf_filesystem_no_in_capsule 00:12:40.878 ************************************ 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=829484 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 829484 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 829484 ']' 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.878 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.878 [2024-11-20 12:26:46.134819] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:12:40.878 [2024-11-20 12:26:46.134856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.878 [2024-11-20 12:26:46.210726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.878 [2024-11-20 12:26:46.250324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.878 [2024-11-20 12:26:46.250359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:40.878 [2024-11-20 12:26:46.250367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.878 [2024-11-20 12:26:46.250372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.878 [2024-11-20 12:26:46.250377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.878 [2024-11-20 12:26:46.255430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.878 [2024-11-20 12:26:46.255468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.878 [2024-11-20 12:26:46.255582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.878 [2024-11-20 12:26:46.255583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.447 [2024-11-20 12:26:46.992967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:41.447 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 Malloc1 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 [2024-11-20 12:26:47.135479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:41.448 12:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:41.448 { 00:12:41.448 "name": "Malloc1", 00:12:41.448 "aliases": [ 00:12:41.448 "8b7cc5a1-0e90-441a-827e-94182f06534c" 00:12:41.448 ], 00:12:41.448 "product_name": "Malloc disk", 00:12:41.448 "block_size": 512, 00:12:41.448 "num_blocks": 1048576, 00:12:41.448 "uuid": "8b7cc5a1-0e90-441a-827e-94182f06534c", 00:12:41.448 "assigned_rate_limits": { 00:12:41.448 "rw_ios_per_sec": 0, 00:12:41.448 "rw_mbytes_per_sec": 0, 00:12:41.448 "r_mbytes_per_sec": 0, 00:12:41.448 "w_mbytes_per_sec": 0 00:12:41.448 }, 00:12:41.448 "claimed": true, 00:12:41.448 "claim_type": "exclusive_write", 00:12:41.448 "zoned": false, 00:12:41.448 "supported_io_types": { 00:12:41.448 "read": true, 00:12:41.448 "write": true, 00:12:41.448 "unmap": true, 00:12:41.448 "flush": true, 00:12:41.448 "reset": true, 00:12:41.448 "nvme_admin": false, 00:12:41.448 "nvme_io": false, 00:12:41.448 "nvme_io_md": false, 00:12:41.448 "write_zeroes": true, 00:12:41.448 "zcopy": true, 00:12:41.448 "get_zone_info": false, 00:12:41.448 "zone_management": false, 00:12:41.448 "zone_append": false, 00:12:41.448 "compare": false, 00:12:41.448 "compare_and_write": 
false, 00:12:41.448 "abort": true, 00:12:41.448 "seek_hole": false, 00:12:41.448 "seek_data": false, 00:12:41.448 "copy": true, 00:12:41.448 "nvme_iov_md": false 00:12:41.448 }, 00:12:41.448 "memory_domains": [ 00:12:41.448 { 00:12:41.448 "dma_device_id": "system", 00:12:41.448 "dma_device_type": 1 00:12:41.448 }, 00:12:41.448 { 00:12:41.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.448 "dma_device_type": 2 00:12:41.448 } 00:12:41.448 ], 00:12:41.448 "driver_specific": {} 00:12:41.448 } 00:12:41.448 ]' 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:41.448 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:41.708 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:41.708 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:41.708 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:41.708 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:41.708 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.085 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:43.085 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:43.085 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.085 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:43.085 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:44.994 12:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:44.994 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:45.253 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:45.512 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:46.448 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:46.448 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:46.448 12:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:46.448 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.449 ************************************ 00:12:46.449 START TEST filesystem_ext4 00:12:46.449 ************************************ 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:46.449 12:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:46.449 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:46.449 mke2fs 1.47.0 (5-Feb-2023) 00:12:46.449 Discarding device blocks: 0/522240 done 00:12:46.449 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:46.449 Filesystem UUID: 05befde2-fab7-4702-b396-226485f03a24 00:12:46.449 Superblock backups stored on blocks: 00:12:46.449 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:46.449 00:12:46.449 Allocating group tables: 0/64 done 00:12:46.449 Writing inode tables: 0/64 done 00:12:46.707 Creating journal (8192 blocks): done 00:12:47.843 Writing superblocks and filesystem accounting information: 0/64 done 00:12:47.843 00:12:47.843 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:47.843 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:54.415 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 829484 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:54.415 00:12:54.415 real 0m7.486s 00:12:54.415 user 0m0.019s 00:12:54.415 sys 0m0.081s 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:54.415 ************************************ 00:12:54.415 END TEST filesystem_ext4 00:12:54.415 ************************************ 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:54.415 
12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.415 ************************************ 00:12:54.415 START TEST filesystem_btrfs 00:12:54.415 ************************************ 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:54.415 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:54.415 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:54.415 btrfs-progs v6.8.1 00:12:54.415 See https://btrfs.readthedocs.io for more information. 00:12:54.415 00:12:54.415 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:54.415 NOTE: several default settings have changed in version 5.15, please make sure 00:12:54.415 this does not affect your deployments: 00:12:54.415 - DUP for metadata (-m dup) 00:12:54.415 - enabled no-holes (-O no-holes) 00:12:54.415 - enabled free-space-tree (-R free-space-tree) 00:12:54.415 00:12:54.415 Label: (null) 00:12:54.415 UUID: 934190f2-4ddb-497a-a140-d33644301e13 00:12:54.415 Node size: 16384 00:12:54.415 Sector size: 4096 (CPU page size: 4096) 00:12:54.415 Filesystem size: 510.00MiB 00:12:54.415 Block group profiles: 00:12:54.415 Data: single 8.00MiB 00:12:54.415 Metadata: DUP 32.00MiB 00:12:54.415 System: DUP 8.00MiB 00:12:54.415 SSD detected: yes 00:12:54.415 Zoned device: no 00:12:54.415 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:54.415 Checksum: crc32c 00:12:54.415 Number of devices: 1 00:12:54.415 Devices: 00:12:54.415 ID SIZE PATH 00:12:54.415 1 510.00MiB /dev/nvme0n1p1 00:12:54.415 00:12:54.415 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:54.415 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:54.678 12:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 829484 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:54.678 00:12:54.678 real 0m0.656s 00:12:54.678 user 0m0.030s 00:12:54.678 sys 0m0.111s 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.678 
12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:54.678 ************************************ 00:12:54.678 END TEST filesystem_btrfs 00:12:54.678 ************************************ 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.678 ************************************ 00:12:54.678 START TEST filesystem_xfs 00:12:54.678 ************************************ 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:54.678 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:54.938 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:54.938 = sectsz=512 attr=2, projid32bit=1 00:12:54.938 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:54.938 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:54.938 data = bsize=4096 blocks=130560, imaxpct=25 00:12:54.938 = sunit=0 swidth=0 blks 00:12:54.938 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:54.938 log =internal log bsize=4096 blocks=16384, version=2 00:12:54.938 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:54.938 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:55.874 Discarding blocks...Done. 
00:12:55.874 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:55.874 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:58.408 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:58.408 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:58.408 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:58.408 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 829484 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:58.408 12:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:58.408 00:12:58.408 real 0m3.667s 00:12:58.408 user 0m0.028s 00:12:58.408 sys 0m0.071s 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:58.408 ************************************ 00:12:58.408 END TEST filesystem_xfs 00:12:58.408 ************************************ 00:12:58.408 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 829484 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 829484 ']' 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 829484 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 829484 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 829484' 00:12:58.667 killing process with pid 829484 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 829484 00:12:58.667 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 829484 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:59.236 00:12:59.236 real 0m18.646s 00:12:59.236 user 1m13.571s 00:12:59.236 sys 0m1.435s 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.236 ************************************ 00:12:59.236 END TEST nvmf_filesystem_no_in_capsule 00:12:59.236 ************************************ 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.236 12:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.236 ************************************ 00:12:59.236 START TEST nvmf_filesystem_in_capsule 00:12:59.236 ************************************ 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=833168 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 833168 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 833168 ']' 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.236 12:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.236 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.236 [2024-11-20 12:27:04.856767] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:12:59.236 [2024-11-20 12:27:04.856812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.236 [2024-11-20 12:27:04.935788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.236 [2024-11-20 12:27:04.972967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.236 [2024-11-20 12:27:04.973005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.236 [2024-11-20 12:27:04.973012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.236 [2024-11-20 12:27:04.973019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.236 [2024-11-20 12:27:04.973024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:59.236 [2024-11-20 12:27:04.974542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.236 [2024-11-20 12:27:04.974652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.236 [2024-11-20 12:27:04.974740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.236 [2024-11-20 12:27:04.974741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.495 [2024-11-20 12:27:05.118007] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.495 Malloc1 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.495 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.496 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.496 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.755 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.755 [2024-11-20 12:27:05.264235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.755 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:59.755 { 00:12:59.755 "name": "Malloc1", 00:12:59.755 "aliases": [ 00:12:59.755 "0fad242c-3924-4ec1-90f3-e2714e50d256" 00:12:59.755 ], 00:12:59.755 "product_name": "Malloc disk", 00:12:59.755 "block_size": 512, 00:12:59.755 "num_blocks": 1048576, 00:12:59.755 "uuid": "0fad242c-3924-4ec1-90f3-e2714e50d256", 00:12:59.755 "assigned_rate_limits": { 00:12:59.755 "rw_ios_per_sec": 0, 00:12:59.755 "rw_mbytes_per_sec": 0, 00:12:59.755 "r_mbytes_per_sec": 0, 00:12:59.755 "w_mbytes_per_sec": 0 00:12:59.755 }, 00:12:59.755 "claimed": true, 00:12:59.755 "claim_type": "exclusive_write", 00:12:59.755 "zoned": false, 00:12:59.755 "supported_io_types": { 00:12:59.755 "read": true, 00:12:59.755 "write": true, 00:12:59.755 "unmap": true, 00:12:59.755 "flush": true, 00:12:59.755 "reset": true, 00:12:59.755 "nvme_admin": false, 00:12:59.755 "nvme_io": false, 00:12:59.755 "nvme_io_md": false, 00:12:59.755 "write_zeroes": true, 00:12:59.755 "zcopy": true, 00:12:59.755 "get_zone_info": false, 00:12:59.755 "zone_management": false, 00:12:59.755 "zone_append": false, 00:12:59.755 "compare": false, 00:12:59.755 "compare_and_write": false, 00:12:59.755 "abort": true, 00:12:59.755 "seek_hole": false, 00:12:59.755 "seek_data": false, 00:12:59.755 "copy": true, 00:12:59.755 "nvme_iov_md": false 00:12:59.755 }, 00:12:59.755 "memory_domains": [ 00:12:59.755 { 00:12:59.755 "dma_device_id": "system", 00:12:59.755 "dma_device_type": 1 00:12:59.755 }, 00:12:59.755 { 00:12:59.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.755 "dma_device_type": 2 00:12:59.755 } 00:12:59.755 ], 00:12:59.755 
"driver_specific": {} 00:12:59.755 } 00:12:59.755 ]' 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:59.755 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.132 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.132 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.132 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.132 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:13:01.132 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:03.038 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:03.038 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:03.297 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:03.558 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:04.937 ************************************ 00:13:04.937 START TEST filesystem_in_capsule_ext4 00:13:04.937 ************************************ 00:13:04.937 12:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:04.937 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:04.937 mke2fs 1.47.0 (5-Feb-2023) 00:13:04.937 Discarding device blocks: 
0/522240 done 00:13:04.937 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:04.937 Filesystem UUID: 6473996f-dc33-405d-9a25-78ac5a2cd295 00:13:04.937 Superblock backups stored on blocks: 00:13:04.937 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:04.937 00:13:04.937 Allocating group tables: 0/64 done 00:13:04.937 Writing inode tables: 0/64 done 00:13:07.473 Creating journal (8192 blocks): done 00:13:07.473 Writing superblocks and filesystem accounting information: 0/64 done 00:13:07.473 00:13:07.473 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:07.473 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 833168 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:14.041 00:13:14.041 real 0m9.049s 00:13:14.041 user 0m0.021s 00:13:14.041 sys 0m0.081s 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 ************************************ 00:13:14.041 END TEST filesystem_in_capsule_ext4 00:13:14.041 ************************************ 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 ************************************ 00:13:14.041 START 
TEST filesystem_in_capsule_btrfs 00:13:14.041 ************************************ 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:14.041 btrfs-progs v6.8.1 00:13:14.041 See https://btrfs.readthedocs.io for more information. 00:13:14.041 00:13:14.041 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:14.041 NOTE: several default settings have changed in version 5.15, please make sure 00:13:14.041 this does not affect your deployments: 00:13:14.041 - DUP for metadata (-m dup) 00:13:14.041 - enabled no-holes (-O no-holes) 00:13:14.041 - enabled free-space-tree (-R free-space-tree) 00:13:14.041 00:13:14.041 Label: (null) 00:13:14.041 UUID: 6c0342c4-7e35-4bb9-b25d-215db1d9e2f2 00:13:14.041 Node size: 16384 00:13:14.041 Sector size: 4096 (CPU page size: 4096) 00:13:14.041 Filesystem size: 510.00MiB 00:13:14.041 Block group profiles: 00:13:14.041 Data: single 8.00MiB 00:13:14.041 Metadata: DUP 32.00MiB 00:13:14.041 System: DUP 8.00MiB 00:13:14.041 SSD detected: yes 00:13:14.041 Zoned device: no 00:13:14.041 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:14.041 Checksum: crc32c 00:13:14.041 Number of devices: 1 00:13:14.041 Devices: 00:13:14.041 ID SIZE PATH 00:13:14.041 1 510.00MiB /dev/nvme0n1p1 00:13:14.041 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:14.041 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 833168 00:13:14.300 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:14.301 00:13:14.301 real 0m0.455s 00:13:14.301 user 0m0.025s 00:13:14.301 sys 0m0.114s 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 ************************************ 00:13:14.301 END TEST filesystem_in_capsule_btrfs 00:13:14.301 ************************************ 00:13:14.301 12:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 ************************************ 00:13:14.301 START TEST filesystem_in_capsule_xfs 00:13:14.301 ************************************ 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:14.301 
12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:14.301 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:14.559 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:14.559 = sectsz=512 attr=2, projid32bit=1 00:13:14.559 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:14.559 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:14.559 data = bsize=4096 blocks=130560, imaxpct=25 00:13:14.559 = sunit=0 swidth=0 blks 00:13:14.559 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:14.559 log =internal log bsize=4096 blocks=16384, version=2 00:13:14.559 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:14.560 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:15.127 Discarding blocks...Done. 
00:13:15.127 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:15.127 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 833168 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:17.032 00:13:17.032 real 0m2.705s 00:13:17.032 user 0m0.024s 00:13:17.032 sys 0m0.072s 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:17.032 ************************************ 00:13:17.032 END TEST filesystem_in_capsule_xfs 00:13:17.032 ************************************ 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:17.032 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.291 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 833168 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 833168 ']' 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 833168 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:17.291 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.291 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833168 00:13:17.291 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.291 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.291 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833168' 00:13:17.291 killing process with pid 833168 00:13:17.291 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 833168 00:13:17.291 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 833168 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:17.860 00:13:17.860 real 0m18.538s 00:13:17.860 user 1m13.062s 00:13:17.860 sys 0m1.374s 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.860 ************************************ 00:13:17.860 END TEST nvmf_filesystem_in_capsule 00:13:17.860 ************************************ 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.860 rmmod nvme_tcp 00:13:17.860 rmmod nvme_fabrics 00:13:17.860 rmmod nvme_keyring 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.860 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.766 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.766 00:13:19.766 real 0m46.165s 00:13:19.766 user 2m28.670s 00:13:19.766 sys 0m7.776s 00:13:19.766 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.766 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:19.766 ************************************ 00:13:19.766 END TEST nvmf_filesystem 00:13:19.766 ************************************ 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.025 ************************************ 00:13:20.025 START TEST nvmf_target_discovery 00:13:20.025 ************************************ 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:20.025 * Looking for test storage... 
00:13:20.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:20.025 
12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.025 --rc genhtml_branch_coverage=1 00:13:20.025 --rc genhtml_function_coverage=1 00:13:20.025 --rc genhtml_legend=1 00:13:20.025 --rc geninfo_all_blocks=1 00:13:20.025 --rc geninfo_unexecuted_blocks=1 00:13:20.025 00:13:20.025 ' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.025 --rc genhtml_branch_coverage=1 00:13:20.025 --rc genhtml_function_coverage=1 00:13:20.025 --rc genhtml_legend=1 00:13:20.025 --rc geninfo_all_blocks=1 00:13:20.025 --rc geninfo_unexecuted_blocks=1 00:13:20.025 00:13:20.025 ' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.025 --rc genhtml_branch_coverage=1 00:13:20.025 --rc genhtml_function_coverage=1 00:13:20.025 --rc genhtml_legend=1 00:13:20.025 --rc geninfo_all_blocks=1 00:13:20.025 --rc geninfo_unexecuted_blocks=1 00:13:20.025 00:13:20.025 ' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.025 --rc genhtml_branch_coverage=1 00:13:20.025 --rc genhtml_function_coverage=1 00:13:20.025 --rc genhtml_legend=1 00:13:20.025 --rc geninfo_all_blocks=1 00:13:20.025 --rc geninfo_unexecuted_blocks=1 00:13:20.025 00:13:20.025 ' 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.025 12:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:20.025 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.026 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:20.285 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.286 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.932 12:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.932 12:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:13:26.932 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:13:26.932 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.932 12:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.932 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:13:26.933 Found net devices under 0000:1a:00.0: cvl_0_0 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.933 12:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:13:26.933 Found net devices under 0000:1a:00.1: cvl_0_1 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:13:26.933 00:13:26.933 --- 10.0.0.2 ping statistics --- 00:13:26.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.933 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:13:26.933 00:13:26.933 --- 10.0.0.1 ping statistics --- 00:13:26.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.933 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.933 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=840805 00:13:26.933 12:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 840805 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 840805 ']' 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.933 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.933 [2024-11-20 12:27:32.093938] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:13:26.933 [2024-11-20 12:27:32.093981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.933 [2024-11-20 12:27:32.172132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.933 [2024-11-20 12:27:32.212240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:26.933 [2024-11-20 12:27:32.212277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.933 [2024-11-20 12:27:32.212283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.933 [2024-11-20 12:27:32.212289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.933 [2024-11-20 12:27:32.212293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.933 [2024-11-20 12:27:32.213966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.933 [2024-11-20 12:27:32.214092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.933 [2024-11-20 12:27:32.214202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.933 [2024-11-20 12:27:32.214203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.193 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 [2024-11-20 12:27:32.958235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 Null1 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 
12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 [2024-11-20 12:27:33.003246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 Null2 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 Null3 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 Null4 00:13:27.451 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.451 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:27.708 00:13:27.708 Discovery Log Number of Records 6, Generation counter 6 00:13:27.708 =====Discovery Log Entry 0====== 00:13:27.708 trtype: tcp 00:13:27.708 adrfam: ipv4 00:13:27.708 subtype: current discovery subsystem 00:13:27.708 treq: not required 00:13:27.708 portid: 0 00:13:27.708 trsvcid: 4420 00:13:27.708 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:27.708 traddr: 10.0.0.2 00:13:27.708 eflags: explicit discovery connections, duplicate discovery information 00:13:27.708 sectype: none 00:13:27.708 =====Discovery Log Entry 1====== 00:13:27.708 trtype: tcp 00:13:27.708 adrfam: ipv4 00:13:27.708 subtype: nvme subsystem 00:13:27.708 treq: not required 00:13:27.708 portid: 0 00:13:27.708 trsvcid: 4420 00:13:27.708 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:27.708 traddr: 10.0.0.2 00:13:27.708 eflags: none 00:13:27.708 sectype: none 00:13:27.708 =====Discovery Log Entry 2====== 00:13:27.708 
trtype: tcp 00:13:27.708 adrfam: ipv4 00:13:27.708 subtype: nvme subsystem 00:13:27.708 treq: not required 00:13:27.708 portid: 0 00:13:27.708 trsvcid: 4420 00:13:27.708 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:27.708 traddr: 10.0.0.2 00:13:27.708 eflags: none 00:13:27.708 sectype: none 00:13:27.708 =====Discovery Log Entry 3====== 00:13:27.708 trtype: tcp 00:13:27.708 adrfam: ipv4 00:13:27.708 subtype: nvme subsystem 00:13:27.708 treq: not required 00:13:27.708 portid: 0 00:13:27.708 trsvcid: 4420 00:13:27.708 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:27.708 traddr: 10.0.0.2 00:13:27.708 eflags: none 00:13:27.708 sectype: none 00:13:27.708 =====Discovery Log Entry 4====== 00:13:27.708 trtype: tcp 00:13:27.708 adrfam: ipv4 00:13:27.708 subtype: nvme subsystem 00:13:27.708 treq: not required 00:13:27.708 portid: 0 00:13:27.708 trsvcid: 4420 00:13:27.708 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:27.708 traddr: 10.0.0.2 00:13:27.708 eflags: none 00:13:27.708 sectype: none 00:13:27.708 =====Discovery Log Entry 5====== 00:13:27.708 trtype: tcp 00:13:27.708 adrfam: ipv4 00:13:27.708 subtype: discovery subsystem referral 00:13:27.708 treq: not required 00:13:27.708 portid: 0 00:13:27.708 trsvcid: 4430 00:13:27.708 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:27.708 traddr: 10.0.0.2 00:13:27.708 eflags: none 00:13:27.708 sectype: none 00:13:27.708 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:27.708 Perform nvmf subsystem discovery via RPC 00:13:27.708 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:27.708 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.708 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.708 [ 00:13:27.708 { 00:13:27.708 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:27.708 "subtype": "Discovery", 00:13:27.708 "listen_addresses": [ 00:13:27.708 { 00:13:27.708 "trtype": "TCP", 00:13:27.708 "adrfam": "IPv4", 00:13:27.708 "traddr": "10.0.0.2", 00:13:27.708 "trsvcid": "4420" 00:13:27.708 } 00:13:27.708 ], 00:13:27.708 "allow_any_host": true, 00:13:27.708 "hosts": [] 00:13:27.708 }, 00:13:27.708 { 00:13:27.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.708 "subtype": "NVMe", 00:13:27.708 "listen_addresses": [ 00:13:27.708 { 00:13:27.708 "trtype": "TCP", 00:13:27.708 "adrfam": "IPv4", 00:13:27.708 "traddr": "10.0.0.2", 00:13:27.708 "trsvcid": "4420" 00:13:27.708 } 00:13:27.708 ], 00:13:27.708 "allow_any_host": true, 00:13:27.708 "hosts": [], 00:13:27.708 "serial_number": "SPDK00000000000001", 00:13:27.708 "model_number": "SPDK bdev Controller", 00:13:27.708 "max_namespaces": 32, 00:13:27.708 "min_cntlid": 1, 00:13:27.708 "max_cntlid": 65519, 00:13:27.708 "namespaces": [ 00:13:27.708 { 00:13:27.708 "nsid": 1, 00:13:27.708 "bdev_name": "Null1", 00:13:27.708 "name": "Null1", 00:13:27.708 "nguid": "40641A87093140BEBB813D01DE48E4AB", 00:13:27.708 "uuid": "40641a87-0931-40be-bb81-3d01de48e4ab" 00:13:27.708 } 00:13:27.708 ] 00:13:27.708 }, 00:13:27.708 { 00:13:27.708 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:27.708 "subtype": "NVMe", 00:13:27.708 "listen_addresses": [ 00:13:27.708 { 00:13:27.708 "trtype": "TCP", 00:13:27.708 "adrfam": "IPv4", 00:13:27.708 "traddr": "10.0.0.2", 00:13:27.708 "trsvcid": "4420" 00:13:27.708 } 00:13:27.708 ], 00:13:27.708 "allow_any_host": true, 00:13:27.708 "hosts": [], 00:13:27.708 "serial_number": "SPDK00000000000002", 00:13:27.708 "model_number": "SPDK bdev Controller", 00:13:27.708 "max_namespaces": 32, 00:13:27.708 "min_cntlid": 1, 00:13:27.708 "max_cntlid": 65519, 00:13:27.708 "namespaces": [ 00:13:27.708 { 00:13:27.708 "nsid": 1, 00:13:27.708 "bdev_name": "Null2", 00:13:27.708 "name": "Null2", 00:13:27.708 "nguid": "D7A34F9D37644CC19C6B7D6914E3CE55", 
00:13:27.708 "uuid": "d7a34f9d-3764-4cc1-9c6b-7d6914e3ce55" 00:13:27.708 } 00:13:27.708 ] 00:13:27.708 }, 00:13:27.708 { 00:13:27.708 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:27.708 "subtype": "NVMe", 00:13:27.708 "listen_addresses": [ 00:13:27.708 { 00:13:27.708 "trtype": "TCP", 00:13:27.708 "adrfam": "IPv4", 00:13:27.708 "traddr": "10.0.0.2", 00:13:27.708 "trsvcid": "4420" 00:13:27.708 } 00:13:27.708 ], 00:13:27.708 "allow_any_host": true, 00:13:27.708 "hosts": [], 00:13:27.708 "serial_number": "SPDK00000000000003", 00:13:27.708 "model_number": "SPDK bdev Controller", 00:13:27.708 "max_namespaces": 32, 00:13:27.708 "min_cntlid": 1, 00:13:27.708 "max_cntlid": 65519, 00:13:27.708 "namespaces": [ 00:13:27.708 { 00:13:27.708 "nsid": 1, 00:13:27.708 "bdev_name": "Null3", 00:13:27.708 "name": "Null3", 00:13:27.708 "nguid": "9F365FF134BB4EEFA1819DAE095C97D8", 00:13:27.708 "uuid": "9f365ff1-34bb-4eef-a181-9dae095c97d8" 00:13:27.708 } 00:13:27.708 ] 00:13:27.708 }, 00:13:27.708 { 00:13:27.708 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:27.708 "subtype": "NVMe", 00:13:27.708 "listen_addresses": [ 00:13:27.708 { 00:13:27.708 "trtype": "TCP", 00:13:27.708 "adrfam": "IPv4", 00:13:27.708 "traddr": "10.0.0.2", 00:13:27.708 "trsvcid": "4420" 00:13:27.708 } 00:13:27.708 ], 00:13:27.708 "allow_any_host": true, 00:13:27.708 "hosts": [], 00:13:27.708 "serial_number": "SPDK00000000000004", 00:13:27.708 "model_number": "SPDK bdev Controller", 00:13:27.708 "max_namespaces": 32, 00:13:27.708 "min_cntlid": 1, 00:13:27.708 "max_cntlid": 65519, 00:13:27.708 "namespaces": [ 00:13:27.708 { 00:13:27.708 "nsid": 1, 00:13:27.708 "bdev_name": "Null4", 00:13:27.708 "name": "Null4", 00:13:27.708 "nguid": "320015E3F6784475B4260EF135F0DB93", 00:13:27.708 "uuid": "320015e3-f678-4475-b426-0ef135f0db93" 00:13:27.708 } 00:13:27.708 ] 00:13:27.708 } 00:13:27.708 ] 00:13:27.708 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.709 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.709 rmmod nvme_tcp 00:13:27.709 rmmod nvme_fabrics 00:13:27.967 rmmod nvme_keyring 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 840805 ']' 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 840805 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 840805 ']' 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 840805 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:27.967 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 840805 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 840805' 00:13:27.967 killing process with pid 840805 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 840805 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 840805 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.967 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.499 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.499 00:13:30.499 real 0m10.181s 00:13:30.499 user 0m7.992s 00:13:30.499 sys 0m5.119s 00:13:30.499 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.499 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.499 ************************************ 00:13:30.499 END TEST nvmf_target_discovery 00:13:30.499 ************************************ 00:13:30.499 12:27:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:30.499 12:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.500 12:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.500 12:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.500 ************************************ 00:13:30.500 START TEST nvmf_referrals 00:13:30.500 ************************************ 00:13:30.500 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:30.500 * Looking for test storage... 
00:13:30.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.500 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:30.500 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:30.500 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:30.500 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:30.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.500 
--rc genhtml_branch_coverage=1 00:13:30.500 --rc genhtml_function_coverage=1 00:13:30.500 --rc genhtml_legend=1 00:13:30.500 --rc geninfo_all_blocks=1 00:13:30.500 --rc geninfo_unexecuted_blocks=1 00:13:30.500 00:13:30.500 ' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:30.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.500 --rc genhtml_branch_coverage=1 00:13:30.500 --rc genhtml_function_coverage=1 00:13:30.500 --rc genhtml_legend=1 00:13:30.500 --rc geninfo_all_blocks=1 00:13:30.500 --rc geninfo_unexecuted_blocks=1 00:13:30.500 00:13:30.500 ' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:30.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.500 --rc genhtml_branch_coverage=1 00:13:30.500 --rc genhtml_function_coverage=1 00:13:30.500 --rc genhtml_legend=1 00:13:30.500 --rc geninfo_all_blocks=1 00:13:30.500 --rc geninfo_unexecuted_blocks=1 00:13:30.500 00:13:30.500 ' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:30.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.500 --rc genhtml_branch_coverage=1 00:13:30.500 --rc genhtml_function_coverage=1 00:13:30.500 --rc genhtml_legend=1 00:13:30.500 --rc geninfo_all_blocks=1 00:13:30.500 --rc geninfo_unexecuted_blocks=1 00:13:30.500 00:13:30.500 ' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.500 
12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.500 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.500 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.500 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.065 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:13:37.066 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:13:37.066 Found 
0000:1a:00.1 (0x8086 - 0x159b) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:13:37.066 Found net devices under 0000:1a:00.0: cvl_0_0 00:13:37.066 12:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:13:37.066 Found net devices under 0000:1a:00.1: cvl_0_1 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.066 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:13:37.066 00:13:37.066 --- 10.0.0.2 ping statistics --- 00:13:37.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.066 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:13:37.066 00:13:37.066 --- 10.0.0.1 ping statistics --- 00:13:37.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.066 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:37.066 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=844863 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 844863 00:13:37.067 
12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 844863 ']' 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.067 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.067 [2024-11-20 12:27:42.365070] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:13:37.067 [2024-11-20 12:27:42.365118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.067 [2024-11-20 12:27:42.440160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.067 [2024-11-20 12:27:42.478705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.067 [2024-11-20 12:27:42.478741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:37.067 [2024-11-20 12:27:42.478747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.067 [2024-11-20 12:27:42.478752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.067 [2024-11-20 12:27:42.478757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.067 [2024-11-20 12:27:42.480387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.067 [2024-11-20 12:27:42.480519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.067 [2024-11-20 12:27:42.480552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.067 [2024-11-20 12:27:42.480554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.635 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.635 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:37.635 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 [2024-11-20 12:27:43.217373] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 [2024-11-20 12:27:43.230341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:37.636 12:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.636 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.895 12:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.895 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:37.896 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:37.896 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.896 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:38.154 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.155 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.413 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:38.413 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:38.413 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:38.413 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:38.413 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:38.413 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.413 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:38.673 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:38.673 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:38.673 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:38.673 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:38.673 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:38.673 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.933 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:39.192 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:39.192 12:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:39.192 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:39.192 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:39.192 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:39.192 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:39.451 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.711 rmmod nvme_tcp 00:13:39.711 rmmod nvme_fabrics 00:13:39.711 rmmod nvme_keyring 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 844863 ']' 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 844863 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 844863 ']' 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 844863 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 844863 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 844863' 00:13:39.711 killing process with pid 844863 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 844863 00:13:39.711 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 844863 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.970 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.875 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:41.875 00:13:41.875 real 0m11.742s 00:13:41.875 user 0m14.833s 00:13:41.875 sys 0m5.482s 00:13:41.875 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.875 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:41.875 ************************************ 
00:13:41.876 END TEST nvmf_referrals 00:13:41.876 ************************************ 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.135 ************************************ 00:13:42.135 START TEST nvmf_connect_disconnect 00:13:42.135 ************************************ 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:42.135 * Looking for test storage... 
00:13:42.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.135 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.136 --rc genhtml_branch_coverage=1 00:13:42.136 --rc genhtml_function_coverage=1 00:13:42.136 --rc genhtml_legend=1 00:13:42.136 --rc geninfo_all_blocks=1 00:13:42.136 --rc geninfo_unexecuted_blocks=1 00:13:42.136 00:13:42.136 ' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.136 --rc genhtml_branch_coverage=1 00:13:42.136 --rc genhtml_function_coverage=1 00:13:42.136 --rc genhtml_legend=1 00:13:42.136 --rc geninfo_all_blocks=1 00:13:42.136 --rc geninfo_unexecuted_blocks=1 00:13:42.136 00:13:42.136 ' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.136 --rc genhtml_branch_coverage=1 00:13:42.136 --rc genhtml_function_coverage=1 00:13:42.136 --rc genhtml_legend=1 00:13:42.136 --rc geninfo_all_blocks=1 00:13:42.136 --rc geninfo_unexecuted_blocks=1 00:13:42.136 00:13:42.136 ' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.136 --rc genhtml_branch_coverage=1 00:13:42.136 --rc genhtml_function_coverage=1 00:13:42.136 --rc genhtml_legend=1 00:13:42.136 --rc geninfo_all_blocks=1 00:13:42.136 --rc geninfo_unexecuted_blocks=1 00:13:42.136 00:13:42.136 ' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.136 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.137 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.714 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.714 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:13:48.714 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:13:48.714 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.714 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.714 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:13:48.715 Found net devices under 0000:1a:00.0: cvl_0_0 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.715 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:13:48.715 Found net devices under 0000:1a:00.1: cvl_0_1 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.715 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.715 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:13:48.715 00:13:48.715 --- 10.0.0.2 ping statistics --- 00:13:48.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.715 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:48.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:48.715 00:13:48.715 --- 10.0.0.1 ping statistics --- 00:13:48.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.715 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=849237 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 849237 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 849237 ']' 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.715 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.715 [2024-11-20 12:27:54.122925] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:13:48.715 [2024-11-20 12:27:54.122965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.715 [2024-11-20 12:27:54.198416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.715 [2024-11-20 12:27:54.237744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:48.715 [2024-11-20 12:27:54.237781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.715 [2024-11-20 12:27:54.237787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.715 [2024-11-20 12:27:54.237793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.715 [2024-11-20 12:27:54.237798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.715 [2024-11-20 12:27:54.239276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.715 [2024-11-20 12:27:54.239395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.715 [2024-11-20 12:27:54.239508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.715 [2024-11-20 12:27:54.239508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.285 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.285 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:49.285 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.285 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.285 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.286 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.286 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:49.286 12:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.286 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.286 [2024-11-20 12:27:54.980309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.286 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.286 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:49.286 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.286 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.286 12:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.286 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.545 [2024-11-20 12:27:55.049710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.545 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.545 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:49.545 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:49.545 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:52.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:06.903 12:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.903 rmmod nvme_tcp 00:14:06.903 rmmod nvme_fabrics 00:14:06.903 rmmod nvme_keyring 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 849237 ']' 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 849237 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 849237 ']' 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 849237 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 849237 00:14:06.903 
12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 849237' 00:14:06.903 killing process with pid 849237 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 849237 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 849237 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.903 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.810 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.810 00:14:08.810 real 0m26.825s 00:14:08.810 user 1m14.141s 00:14:08.810 sys 0m5.969s 00:14:08.810 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.810 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:08.810 ************************************ 00:14:08.810 END TEST nvmf_connect_disconnect 00:14:08.810 ************************************ 00:14:08.810 12:28:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:08.810 12:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.810 12:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.810 12:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.070 ************************************ 00:14:09.070 START TEST nvmf_multitarget 00:14:09.070 ************************************ 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:09.070 * Looking for test storage... 
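For readers following the transcript, the `nvmf_connect_disconnect` run above configures its target with a short RPC sequence (visible in the `rpc_cmd` xtrace lines). A minimal sketch of that sequence is below; it assumes an already-running `nvmf_tgt` and an SPDK checkout providing `scripts/rpc.py`, and the relative path is illustrative, not the Jenkins workspace path used in this log. It is not runnable standalone, only a distillation of the commands the log records.

```shell
# Sketch of the target-side setup exercised by connect_disconnect.sh above.
# Assumes nvmf_tgt is running and listening on the default RPC socket;
# ./scripts/rpc.py is the SPDK RPC client (path is an assumption here).

# Create the TCP transport (same options as the log: -o, 8192 IO unit, 0 in-capsule)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0

# Back the namespace with a 64 MiB, 512-byte-block malloc bdev (returns "Malloc0")
./scripts/rpc.py bdev_malloc_create 64 512

# Create the subsystem, attach the bdev as a namespace, and listen on 10.0.0.2:4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

An initiator can then reach the subsystem with `nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1`, which is what the five "disconnected 1 controller(s)" iterations in the log repeatedly exercise.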
00:14:09.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.070 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.071 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.071 --rc genhtml_branch_coverage=1 00:14:09.071 --rc genhtml_function_coverage=1 00:14:09.071 --rc genhtml_legend=1 00:14:09.071 --rc geninfo_all_blocks=1 00:14:09.071 --rc geninfo_unexecuted_blocks=1 00:14:09.071 00:14:09.071 ' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.071 --rc genhtml_branch_coverage=1 00:14:09.071 --rc genhtml_function_coverage=1 00:14:09.071 --rc genhtml_legend=1 00:14:09.071 --rc geninfo_all_blocks=1 00:14:09.071 --rc geninfo_unexecuted_blocks=1 00:14:09.071 00:14:09.071 ' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.071 --rc genhtml_branch_coverage=1 00:14:09.071 --rc genhtml_function_coverage=1 00:14:09.071 --rc genhtml_legend=1 00:14:09.071 --rc geninfo_all_blocks=1 00:14:09.071 --rc geninfo_unexecuted_blocks=1 00:14:09.071 00:14:09.071 ' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.071 --rc genhtml_branch_coverage=1 00:14:09.071 --rc genhtml_function_coverage=1 00:14:09.071 --rc genhtml_legend=1 00:14:09.071 --rc geninfo_all_blocks=1 00:14:09.071 --rc geninfo_unexecuted_blocks=1 00:14:09.071 00:14:09.071 ' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.071 12:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.071 12:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.071 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:15.642 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:15.642 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:15.643 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:14:15.643 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:14:15.643 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.643 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:14:15.643 Found net devices under 0000:1a:00.0: cvl_0_0 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.643 
12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:14:15.643 Found net devices under 0000:1a:00.1: cvl_0_1 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.643 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:15.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:14:15.643 00:14:15.643 --- 10.0.0.2 ping statistics --- 00:14:15.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.643 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:15.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:14:15.643 00:14:15.643 --- 10.0.0.1 ping statistics --- 00:14:15.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.643 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=856287 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 856287 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 856287 ']' 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.643 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:15.643 [2024-11-20 12:28:20.989042] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:14:15.643 [2024-11-20 12:28:20.989082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.643 [2024-11-20 12:28:21.067389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.643 [2024-11-20 12:28:21.105192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.644 [2024-11-20 12:28:21.105228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:15.644 [2024-11-20 12:28:21.105234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.644 [2024-11-20 12:28:21.105240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.644 [2024-11-20 12:28:21.105244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.644 [2024-11-20 12:28:21.106914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.644 [2024-11-20 12:28:21.107044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.644 [2024-11-20 12:28:21.107128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.644 [2024-11-20 12:28:21.107129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:16.212 12:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:16.212 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:16.471 "nvmf_tgt_1" 00:14:16.471 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:16.471 "nvmf_tgt_2" 00:14:16.471 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:16.471 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:16.730 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:16.730 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:16.730 true 00:14:16.730 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:16.730 true 00:14:16.730 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:16.730 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.989 rmmod nvme_tcp 00:14:16.989 rmmod nvme_fabrics 00:14:16.989 rmmod nvme_keyring 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 856287 ']' 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 856287 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 856287 ']' 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 856287 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 856287 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 856287' 00:14:16.989 killing process with pid 856287 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 856287 00:14:16.989 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 856287 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.249 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.155 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:19.155 00:14:19.155 real 0m10.337s 00:14:19.155 user 0m9.570s 00:14:19.155 sys 0m5.055s 00:14:19.155 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.155 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:19.155 ************************************ 00:14:19.155 END TEST nvmf_multitarget 00:14:19.155 ************************************ 00:14:19.416 12:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:19.416 12:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:19.416 12:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.416 12:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.416 ************************************ 00:14:19.416 START TEST nvmf_rpc 00:14:19.416 ************************************ 00:14:19.416 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:19.416 * Looking for test storage... 
00:14:19.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.416 12:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.416 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:19.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.417 --rc genhtml_branch_coverage=1 00:14:19.417 --rc genhtml_function_coverage=1 00:14:19.417 --rc genhtml_legend=1 00:14:19.417 --rc geninfo_all_blocks=1 00:14:19.417 --rc geninfo_unexecuted_blocks=1 
00:14:19.417 00:14:19.417 ' 00:14:19.417 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:19.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.417 --rc genhtml_branch_coverage=1 00:14:19.417 --rc genhtml_function_coverage=1 00:14:19.417 --rc genhtml_legend=1 00:14:19.417 --rc geninfo_all_blocks=1 00:14:19.417 --rc geninfo_unexecuted_blocks=1 00:14:19.417 00:14:19.417 ' 00:14:19.417 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:19.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.417 --rc genhtml_branch_coverage=1 00:14:19.417 --rc genhtml_function_coverage=1 00:14:19.417 --rc genhtml_legend=1 00:14:19.417 --rc geninfo_all_blocks=1 00:14:19.417 --rc geninfo_unexecuted_blocks=1 00:14:19.417 00:14:19.417 ' 00:14:19.417 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:19.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.417 --rc genhtml_branch_coverage=1 00:14:19.417 --rc genhtml_function_coverage=1 00:14:19.417 --rc genhtml_legend=1 00:14:19.417 --rc geninfo_all_blocks=1 00:14:19.417 --rc geninfo_unexecuted_blocks=1 00:14:19.417 00:14:19.417 ' 00:14:19.417 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.417 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.677 12:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:19.677 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:19.677 12:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.261 
12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 
(0x8086 - 0x159b)' 00:14:26.261 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.261 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:14:26.262 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:14:26.262 Found net devices under 0000:1a:00.0: cvl_0_0 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:14:26.262 Found net devices under 0000:1a:00.1: cvl_0_1 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.262 12:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:26.262 
12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:26.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:14:26.262 00:14:26.262 --- 10.0.0.2 ping statistics --- 00:14:26.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.262 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:14:26.262 00:14:26.262 --- 10.0.0.1 ping statistics --- 00:14:26.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.262 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=860355 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 860355 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 860355 ']' 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.262 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.262 [2024-11-20 12:28:31.470217] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:14:26.262 [2024-11-20 12:28:31.470267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.262 [2024-11-20 12:28:31.547890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.262 [2024-11-20 12:28:31.588243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.262 [2024-11-20 12:28:31.588278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:26.262 [2024-11-20 12:28:31.588285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.262 [2024-11-20 12:28:31.588291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.262 [2024-11-20 12:28:31.588295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.262 [2024-11-20 12:28:31.589788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.262 [2024-11-20 12:28:31.589899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.262 [2024-11-20 12:28:31.589985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.262 [2024-11-20 12:28:31.589986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.833 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:26.833 "tick_rate": 2200000000, 00:14:26.833 "poll_groups": [ 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_000", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 "completed_nvme_io": 0, 00:14:26.833 "transports": [] 00:14:26.833 }, 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_001", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 "completed_nvme_io": 0, 00:14:26.833 "transports": [] 00:14:26.833 }, 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_002", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 "completed_nvme_io": 0, 00:14:26.833 "transports": [] 00:14:26.833 }, 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_003", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 "completed_nvme_io": 0, 00:14:26.833 "transports": [] 00:14:26.833 } 00:14:26.833 ] 00:14:26.833 }' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:26.833 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.833 [2024-11-20 12:28:32.432083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:26.833 "tick_rate": 2200000000, 00:14:26.833 "poll_groups": [ 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_000", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 "completed_nvme_io": 0, 00:14:26.833 "transports": [ 00:14:26.833 { 00:14:26.833 "trtype": "TCP" 00:14:26.833 } 00:14:26.833 ] 00:14:26.833 }, 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_001", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 
"completed_nvme_io": 0, 00:14:26.833 "transports": [ 00:14:26.833 { 00:14:26.833 "trtype": "TCP" 00:14:26.833 } 00:14:26.833 ] 00:14:26.833 }, 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_002", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 "completed_nvme_io": 0, 00:14:26.833 "transports": [ 00:14:26.833 { 00:14:26.833 "trtype": "TCP" 00:14:26.833 } 00:14:26.833 ] 00:14:26.833 }, 00:14:26.833 { 00:14:26.833 "name": "nvmf_tgt_poll_group_003", 00:14:26.833 "admin_qpairs": 0, 00:14:26.833 "io_qpairs": 0, 00:14:26.833 "current_admin_qpairs": 0, 00:14:26.833 "current_io_qpairs": 0, 00:14:26.833 "pending_bdev_io": 0, 00:14:26.833 "completed_nvme_io": 0, 00:14:26.833 "transports": [ 00:14:26.833 { 00:14:26.833 "trtype": "TCP" 00:14:26.833 } 00:14:26.833 ] 00:14:26.833 } 00:14:26.833 ] 00:14:26.833 }' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:26.833 
12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.833 Malloc1 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.833 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:27.093 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.093 [2024-11-20 12:28:32.609679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:27.093 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:27.094 [2024-11-20 12:28:32.638300] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562' 00:14:27.094 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:27.094 could not add new controller: failed to write to nvme-fabrics device 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.094 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.548 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.548 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:28.548 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.548 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:28.548 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:30.485 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.485 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.485 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.485 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:30.485 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.485 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:14:30.485 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:30.485 12:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:30.485 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.486 [2024-11-20 12:28:36.134978] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562' 00:14:30.486 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:30.486 could not add new controller: failed to write to nvme-fabrics device 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:30.486 
12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.486 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:31.865 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:31.865 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:31.865 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.865 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:31.865 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:33.771 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:34.030 12:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.030 [2024-11-20 12:28:39.570811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.030 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:35.407 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:35.407 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:35.407 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.407 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:35.407 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:37.308 
12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.308 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.308 [2024-11-20 12:28:43.010102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.308 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:38.682 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.682 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:38.682 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.682 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:38.682 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.212 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 12:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 [2024-11-20 12:28:46.541119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.213 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.147 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.147 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:42.147 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.147 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:42.147 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.679 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.679 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.679 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.679 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.679 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:44.679 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.680 [2024-11-20 12:28:50.027280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.680 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.059 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.059 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:46.059 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:46.059 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:46.059 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.965 [2024-11-20 12:28:53.602615] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.965 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.340 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.340 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:49.340 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.340 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:49.340 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:51.245 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:51.245 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:51.245 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.245 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:51.245 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.245 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:51.245 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.504 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 [2024-11-20 12:28:57.121327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 [2024-11-20 12:28:57.169415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 
12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:14:51.505 [2024-11-20 12:28:57.217555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.505 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 [2024-11-20 12:28:57.265712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 [2024-11-20 12:28:57.313892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.765 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:51.765 "tick_rate": 2200000000, 00:14:51.765 "poll_groups": [ 00:14:51.765 { 00:14:51.765 "name": "nvmf_tgt_poll_group_000", 00:14:51.765 "admin_qpairs": 2, 00:14:51.765 "io_qpairs": 196, 00:14:51.765 "current_admin_qpairs": 0, 00:14:51.765 "current_io_qpairs": 0, 00:14:51.765 "pending_bdev_io": 0, 00:14:51.765 "completed_nvme_io": 247, 00:14:51.765 "transports": [ 00:14:51.765 { 00:14:51.765 "trtype": "TCP" 00:14:51.765 } 00:14:51.765 ] 00:14:51.765 }, 00:14:51.765 { 00:14:51.765 "name": "nvmf_tgt_poll_group_001", 00:14:51.765 "admin_qpairs": 2, 00:14:51.765 "io_qpairs": 196, 00:14:51.765 "current_admin_qpairs": 0, 00:14:51.765 "current_io_qpairs": 0, 00:14:51.765 "pending_bdev_io": 0, 00:14:51.765 "completed_nvme_io": 393, 00:14:51.765 "transports": [ 00:14:51.765 { 00:14:51.765 "trtype": "TCP" 00:14:51.765 } 00:14:51.765 ] 00:14:51.765 }, 00:14:51.765 { 00:14:51.765 "name": "nvmf_tgt_poll_group_002", 00:14:51.765 "admin_qpairs": 1, 00:14:51.765 "io_qpairs": 196, 00:14:51.765 "current_admin_qpairs": 0, 00:14:51.765 "current_io_qpairs": 0, 00:14:51.765 "pending_bdev_io": 0, 
00:14:51.765 "completed_nvme_io": 253, 00:14:51.765 "transports": [ 00:14:51.765 { 00:14:51.765 "trtype": "TCP" 00:14:51.765 } 00:14:51.765 ] 00:14:51.765 }, 00:14:51.765 { 00:14:51.765 "name": "nvmf_tgt_poll_group_003", 00:14:51.765 "admin_qpairs": 2, 00:14:51.765 "io_qpairs": 196, 00:14:51.765 "current_admin_qpairs": 0, 00:14:51.765 "current_io_qpairs": 0, 00:14:51.765 "pending_bdev_io": 0, 00:14:51.765 "completed_nvme_io": 241, 00:14:51.765 "transports": [ 00:14:51.765 { 00:14:51.765 "trtype": "TCP" 00:14:51.765 } 00:14:51.766 ] 00:14:51.766 } 00:14:51.766 ] 00:14:51.766 }' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.766 rmmod nvme_tcp 00:14:51.766 rmmod nvme_fabrics 00:14:51.766 rmmod nvme_keyring 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 860355 ']' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 860355 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 860355 ']' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 860355 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.766 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 860355 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 860355' 00:14:52.025 killing process with pid 860355 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 860355 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 860355 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.025 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:54.563 00:14:54.563 real 0m34.822s 00:14:54.563 user 1m46.055s 00:14:54.563 sys 0m6.776s 00:14:54.563 12:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.563 ************************************ 00:14:54.563 END TEST nvmf_rpc 00:14:54.563 ************************************ 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.563 ************************************ 00:14:54.563 START TEST nvmf_invalid 00:14:54.563 ************************************ 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:54.563 * Looking for test storage... 
00:14:54.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:54.563 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:54.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.563 --rc genhtml_branch_coverage=1 00:14:54.563 --rc 
genhtml_function_coverage=1 00:14:54.563 --rc genhtml_legend=1 00:14:54.563 --rc geninfo_all_blocks=1 00:14:54.563 --rc geninfo_unexecuted_blocks=1 00:14:54.563 00:14:54.563 ' 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:54.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.563 --rc genhtml_branch_coverage=1 00:14:54.563 --rc genhtml_function_coverage=1 00:14:54.563 --rc genhtml_legend=1 00:14:54.563 --rc geninfo_all_blocks=1 00:14:54.563 --rc geninfo_unexecuted_blocks=1 00:14:54.563 00:14:54.563 ' 00:14:54.563 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:54.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.563 --rc genhtml_branch_coverage=1 00:14:54.563 --rc genhtml_function_coverage=1 00:14:54.563 --rc genhtml_legend=1 00:14:54.563 --rc geninfo_all_blocks=1 00:14:54.564 --rc geninfo_unexecuted_blocks=1 00:14:54.564 00:14:54.564 ' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:54.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.564 --rc genhtml_branch_coverage=1 00:14:54.564 --rc genhtml_function_coverage=1 00:14:54.564 --rc genhtml_legend=1 00:14:54.564 --rc geninfo_all_blocks=1 00:14:54.564 --rc geninfo_unexecuted_blocks=1 00:14:54.564 00:14:54.564 ' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.564 12:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.564 12:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:54.564 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:01.136 12:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.136 12:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:01.136 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:15:01.137 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:15:01.137 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:15:01.137 Found net devices under 0000:1a:00.0: cvl_0_0 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:15:01.137 Found net devices under 0000:1a:00.1: cvl_0_1 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.137 12:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.137 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.137 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:01.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:15:01.137 00:15:01.137 --- 10.0.0.2 ping statistics --- 00:15:01.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.137 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:15:01.137 00:15:01.137 --- 10.0.0.1 ping statistics --- 00:15:01.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.137 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.137 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=868787 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 868787 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 868787 ']' 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.137 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:01.137 [2024-11-20 12:29:06.292594] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:15:01.137 [2024-11-20 12:29:06.292643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.137 [2024-11-20 12:29:06.369284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.137 [2024-11-20 12:29:06.409769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.137 [2024-11-20 12:29:06.409802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.137 [2024-11-20 12:29:06.409812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.137 [2024-11-20 12:29:06.409817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.138 [2024-11-20 12:29:06.409821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.138 [2024-11-20 12:29:06.411402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.138 [2024-11-20 12:29:06.411529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.138 [2024-11-20 12:29:06.411643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.138 [2024-11-20 12:29:06.411643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.396 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.396 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:01.396 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:01.396 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:01.396 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:01.396 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.396 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:01.655 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9650 00:15:01.655 [2024-11-20 12:29:07.313365] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:01.655 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:01.655 { 00:15:01.655 "nqn": "nqn.2016-06.io.spdk:cnode9650", 00:15:01.655 "tgt_name": "foobar", 00:15:01.655 "method": "nvmf_create_subsystem", 00:15:01.655 "req_id": 1 00:15:01.655 } 00:15:01.655 Got JSON-RPC error 
response 00:15:01.655 response: 00:15:01.655 { 00:15:01.655 "code": -32603, 00:15:01.655 "message": "Unable to find target foobar" 00:15:01.655 }' 00:15:01.655 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:01.655 { 00:15:01.655 "nqn": "nqn.2016-06.io.spdk:cnode9650", 00:15:01.655 "tgt_name": "foobar", 00:15:01.655 "method": "nvmf_create_subsystem", 00:15:01.655 "req_id": 1 00:15:01.655 } 00:15:01.655 Got JSON-RPC error response 00:15:01.655 response: 00:15:01.655 { 00:15:01.655 "code": -32603, 00:15:01.655 "message": "Unable to find target foobar" 00:15:01.655 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:01.655 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:01.655 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31233 00:15:01.914 [2024-11-20 12:29:07.506060] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31233: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:01.914 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:01.914 { 00:15:01.914 "nqn": "nqn.2016-06.io.spdk:cnode31233", 00:15:01.914 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:01.914 "method": "nvmf_create_subsystem", 00:15:01.914 "req_id": 1 00:15:01.914 } 00:15:01.914 Got JSON-RPC error response 00:15:01.914 response: 00:15:01.914 { 00:15:01.914 "code": -32602, 00:15:01.914 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:01.914 }' 00:15:01.914 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:01.914 { 00:15:01.914 "nqn": "nqn.2016-06.io.spdk:cnode31233", 00:15:01.914 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:01.914 "method": "nvmf_create_subsystem", 00:15:01.914 
"req_id": 1 00:15:01.914 } 00:15:01.914 Got JSON-RPC error response 00:15:01.914 response: 00:15:01.914 { 00:15:01.914 "code": -32602, 00:15:01.915 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:01.915 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:01.915 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:01.915 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17360 00:15:02.174 [2024-11-20 12:29:07.698670] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17360: invalid model number 'SPDK_Controller' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:02.174 { 00:15:02.174 "nqn": "nqn.2016-06.io.spdk:cnode17360", 00:15:02.174 "model_number": "SPDK_Controller\u001f", 00:15:02.174 "method": "nvmf_create_subsystem", 00:15:02.174 "req_id": 1 00:15:02.174 } 00:15:02.174 Got JSON-RPC error response 00:15:02.174 response: 00:15:02.174 { 00:15:02.174 "code": -32602, 00:15:02.174 "message": "Invalid MN SPDK_Controller\u001f" 00:15:02.174 }' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:02.174 { 00:15:02.174 "nqn": "nqn.2016-06.io.spdk:cnode17360", 00:15:02.174 "model_number": "SPDK_Controller\u001f", 00:15:02.174 "method": "nvmf_create_subsystem", 00:15:02.174 "req_id": 1 00:15:02.174 } 00:15:02.174 Got JSON-RPC error response 00:15:02.174 response: 00:15:02.174 { 00:15:02.174 "code": -32602, 00:15:02.174 "message": "Invalid MN SPDK_Controller\u001f" 00:15:02.174 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:02.174 12:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.174 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:02.175 12:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:02.175 12:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.175 12:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5eVG&aMMw9)oZ#N}bXHF' 00:15:02.175 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '5eVG&aMMw9)oZ#N}bXHF' nqn.2016-06.io.spdk:cnode19937 00:15:02.435 [2024-11-20 12:29:08.023733] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19937: invalid serial number '5eVG&aMMw9)oZ#N}bXHF' 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:02.435 { 00:15:02.435 "nqn": "nqn.2016-06.io.spdk:cnode19937", 00:15:02.435 "serial_number": "5eVG&aMMw9\u007f)oZ#N}bXHF", 00:15:02.435 "method": "nvmf_create_subsystem", 00:15:02.435 "req_id": 1 00:15:02.435 } 00:15:02.435 Got JSON-RPC error response 00:15:02.435 response: 00:15:02.435 { 00:15:02.435 "code": -32602, 00:15:02.435 "message": "Invalid SN 5eVG&aMMw9\u007f)oZ#N}bXHF" 00:15:02.435 }' 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:02.435 { 00:15:02.435 "nqn": "nqn.2016-06.io.spdk:cnode19937", 00:15:02.435 "serial_number": "5eVG&aMMw9\u007f)oZ#N}bXHF", 00:15:02.435 "method": "nvmf_create_subsystem", 00:15:02.435 "req_id": 1 00:15:02.435 } 00:15:02.435 Got JSON-RPC error response 00:15:02.435 response: 00:15:02.435 { 00:15:02.435 "code": -32602, 00:15:02.435 "message": "Invalid SN 5eVG&aMMw9\u007f)oZ#N}bXHF" 00:15:02.435 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.435 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:02.436 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:02.436 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:02.436 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:02.436 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:02.696 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:02.696 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:02.696 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@0RC8zPY0c/'\''5[EJV;$NhX$o #IcT@AX=+mZJP:Wc' 00:15:02.696 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '@0RC8zPY0c/'\''5[EJV;$NhX$o #IcT@AX=+mZJP:Wc' nqn.2016-06.io.spdk:cnode7673 00:15:02.955 [2024-11-20 12:29:08.497333] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7673: invalid model number '@0RC8zPY0c/'5[EJV;$NhX$o #IcT@AX=+mZJP:Wc' 00:15:02.955 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:02.955 { 00:15:02.955 "nqn": "nqn.2016-06.io.spdk:cnode7673", 00:15:02.955 "model_number": "@0RC8zPY0c/'\''5[EJV;$NhX$o #IcT@AX=+mZJP:Wc", 00:15:02.955 "method": "nvmf_create_subsystem", 00:15:02.955 "req_id": 1 00:15:02.955 } 00:15:02.955 Got JSON-RPC error response 00:15:02.955 response: 00:15:02.955 { 00:15:02.955 "code": -32602, 00:15:02.955 "message": "Invalid MN @0RC8zPY0c/'\''5[EJV;$NhX$o #IcT@AX=+mZJP:Wc" 00:15:02.955 }' 00:15:02.955 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:02.955 { 00:15:02.955 "nqn": 
"nqn.2016-06.io.spdk:cnode7673", 00:15:02.955 "model_number": "@0RC8zPY0c/'5[EJV;$NhX$o #IcT@AX=+mZJP:Wc", 00:15:02.955 "method": "nvmf_create_subsystem", 00:15:02.955 "req_id": 1 00:15:02.955 } 00:15:02.955 Got JSON-RPC error response 00:15:02.955 response: 00:15:02.955 { 00:15:02.955 "code": -32602, 00:15:02.955 "message": "Invalid MN @0RC8zPY0c/'5[EJV;$NhX$o #IcT@AX=+mZJP:Wc" 00:15:02.955 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:02.955 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:02.955 [2024-11-20 12:29:08.686030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.215 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:03.215 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:03.215 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:03.215 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:03.215 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:03.215 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:03.474 [2024-11-20 12:29:09.072283] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:03.474 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:03.474 { 00:15:03.474 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:03.474 "listen_address": { 00:15:03.474 "trtype": "tcp", 00:15:03.474 "traddr": "", 00:15:03.474 "trsvcid": "4421" 
00:15:03.474 }, 00:15:03.474 "method": "nvmf_subsystem_remove_listener", 00:15:03.474 "req_id": 1 00:15:03.474 } 00:15:03.474 Got JSON-RPC error response 00:15:03.474 response: 00:15:03.474 { 00:15:03.474 "code": -32602, 00:15:03.474 "message": "Invalid parameters" 00:15:03.474 }' 00:15:03.474 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:03.474 { 00:15:03.474 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:03.474 "listen_address": { 00:15:03.474 "trtype": "tcp", 00:15:03.474 "traddr": "", 00:15:03.474 "trsvcid": "4421" 00:15:03.474 }, 00:15:03.474 "method": "nvmf_subsystem_remove_listener", 00:15:03.474 "req_id": 1 00:15:03.474 } 00:15:03.474 Got JSON-RPC error response 00:15:03.474 response: 00:15:03.474 { 00:15:03.474 "code": -32602, 00:15:03.474 "message": "Invalid parameters" 00:15:03.474 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:03.474 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20802 -i 0 00:15:03.733 [2024-11-20 12:29:09.252849] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20802: invalid cntlid range [0-65519] 00:15:03.733 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:03.733 { 00:15:03.733 "nqn": "nqn.2016-06.io.spdk:cnode20802", 00:15:03.733 "min_cntlid": 0, 00:15:03.733 "method": "nvmf_create_subsystem", 00:15:03.733 "req_id": 1 00:15:03.733 } 00:15:03.733 Got JSON-RPC error response 00:15:03.733 response: 00:15:03.733 { 00:15:03.733 "code": -32602, 00:15:03.733 "message": "Invalid cntlid range [0-65519]" 00:15:03.733 }' 00:15:03.733 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:03.733 { 00:15:03.733 "nqn": "nqn.2016-06.io.spdk:cnode20802", 00:15:03.733 "min_cntlid": 0, 00:15:03.733 "method": 
"nvmf_create_subsystem", 00:15:03.733 "req_id": 1 00:15:03.733 } 00:15:03.733 Got JSON-RPC error response 00:15:03.733 response: 00:15:03.733 { 00:15:03.733 "code": -32602, 00:15:03.733 "message": "Invalid cntlid range [0-65519]" 00:15:03.733 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:03.733 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25398 -i 65520 00:15:03.733 [2024-11-20 12:29:09.437508] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25398: invalid cntlid range [65520-65519] 00:15:03.733 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:03.733 { 00:15:03.733 "nqn": "nqn.2016-06.io.spdk:cnode25398", 00:15:03.733 "min_cntlid": 65520, 00:15:03.733 "method": "nvmf_create_subsystem", 00:15:03.733 "req_id": 1 00:15:03.733 } 00:15:03.733 Got JSON-RPC error response 00:15:03.733 response: 00:15:03.733 { 00:15:03.733 "code": -32602, 00:15:03.733 "message": "Invalid cntlid range [65520-65519]" 00:15:03.733 }' 00:15:03.733 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:03.733 { 00:15:03.733 "nqn": "nqn.2016-06.io.spdk:cnode25398", 00:15:03.733 "min_cntlid": 65520, 00:15:03.733 "method": "nvmf_create_subsystem", 00:15:03.733 "req_id": 1 00:15:03.733 } 00:15:03.733 Got JSON-RPC error response 00:15:03.733 response: 00:15:03.733 { 00:15:03.733 "code": -32602, 00:15:03.733 "message": "Invalid cntlid range [65520-65519]" 00:15:03.733 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:03.733 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13691 -I 0 00:15:03.992 [2024-11-20 12:29:09.630107] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode13691: invalid cntlid range [1-0] 00:15:03.992 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:03.992 { 00:15:03.992 "nqn": "nqn.2016-06.io.spdk:cnode13691", 00:15:03.992 "max_cntlid": 0, 00:15:03.992 "method": "nvmf_create_subsystem", 00:15:03.992 "req_id": 1 00:15:03.992 } 00:15:03.992 Got JSON-RPC error response 00:15:03.992 response: 00:15:03.992 { 00:15:03.992 "code": -32602, 00:15:03.992 "message": "Invalid cntlid range [1-0]" 00:15:03.992 }' 00:15:03.992 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:03.992 { 00:15:03.992 "nqn": "nqn.2016-06.io.spdk:cnode13691", 00:15:03.992 "max_cntlid": 0, 00:15:03.992 "method": "nvmf_create_subsystem", 00:15:03.992 "req_id": 1 00:15:03.992 } 00:15:03.992 Got JSON-RPC error response 00:15:03.992 response: 00:15:03.992 { 00:15:03.992 "code": -32602, 00:15:03.992 "message": "Invalid cntlid range [1-0]" 00:15:03.992 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:03.992 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13777 -I 65520 00:15:04.251 [2024-11-20 12:29:09.830762] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13777: invalid cntlid range [1-65520] 00:15:04.251 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:04.251 { 00:15:04.251 "nqn": "nqn.2016-06.io.spdk:cnode13777", 00:15:04.251 "max_cntlid": 65520, 00:15:04.251 "method": "nvmf_create_subsystem", 00:15:04.251 "req_id": 1 00:15:04.251 } 00:15:04.251 Got JSON-RPC error response 00:15:04.251 response: 00:15:04.251 { 00:15:04.251 "code": -32602, 00:15:04.251 "message": "Invalid cntlid range [1-65520]" 00:15:04.251 }' 00:15:04.251 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:15:04.251 { 00:15:04.251 "nqn": "nqn.2016-06.io.spdk:cnode13777", 00:15:04.251 "max_cntlid": 65520, 00:15:04.251 "method": "nvmf_create_subsystem", 00:15:04.251 "req_id": 1 00:15:04.251 } 00:15:04.251 Got JSON-RPC error response 00:15:04.251 response: 00:15:04.251 { 00:15:04.251 "code": -32602, 00:15:04.251 "message": "Invalid cntlid range [1-65520]" 00:15:04.251 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:04.251 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10687 -i 6 -I 5 00:15:04.510 [2024-11-20 12:29:10.031427] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10687: invalid cntlid range [6-5] 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:04.510 { 00:15:04.510 "nqn": "nqn.2016-06.io.spdk:cnode10687", 00:15:04.510 "min_cntlid": 6, 00:15:04.510 "max_cntlid": 5, 00:15:04.510 "method": "nvmf_create_subsystem", 00:15:04.510 "req_id": 1 00:15:04.510 } 00:15:04.510 Got JSON-RPC error response 00:15:04.510 response: 00:15:04.510 { 00:15:04.510 "code": -32602, 00:15:04.510 "message": "Invalid cntlid range [6-5]" 00:15:04.510 }' 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:04.510 { 00:15:04.510 "nqn": "nqn.2016-06.io.spdk:cnode10687", 00:15:04.510 "min_cntlid": 6, 00:15:04.510 "max_cntlid": 5, 00:15:04.510 "method": "nvmf_create_subsystem", 00:15:04.510 "req_id": 1 00:15:04.510 } 00:15:04.510 Got JSON-RPC error response 00:15:04.510 response: 00:15:04.510 { 00:15:04.510 "code": -32602, 00:15:04.510 "message": "Invalid cntlid range [6-5]" 00:15:04.510 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:04.510 { 00:15:04.510 "name": "foobar", 00:15:04.510 "method": "nvmf_delete_target", 00:15:04.510 "req_id": 1 00:15:04.510 } 00:15:04.510 Got JSON-RPC error response 00:15:04.510 response: 00:15:04.510 { 00:15:04.510 "code": -32602, 00:15:04.510 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:04.510 }' 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:04.510 { 00:15:04.510 "name": "foobar", 00:15:04.510 "method": "nvmf_delete_target", 00:15:04.510 "req_id": 1 00:15:04.510 } 00:15:04.510 Got JSON-RPC error response 00:15:04.510 response: 00:15:04.510 { 00:15:04.510 "code": -32602, 00:15:04.510 "message": "The specified target doesn't exist, cannot delete it." 00:15:04.510 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:04.510 rmmod nvme_tcp 00:15:04.510 
rmmod nvme_fabrics 00:15:04.510 rmmod nvme_keyring 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 868787 ']' 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 868787 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 868787 ']' 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 868787 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.510 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 868787 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 868787' 00:15:04.769 killing process with pid 868787 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 868787 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 868787 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.769 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:07.310 00:15:07.310 real 0m12.628s 00:15:07.310 user 0m20.426s 00:15:07.310 sys 0m5.512s 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:07.310 ************************************ 00:15:07.310 END TEST nvmf_invalid 00:15:07.310 ************************************ 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:07.310 12:29:12 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.310 ************************************ 00:15:07.310 START TEST nvmf_connect_stress 00:15:07.310 ************************************ 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:07.310 * Looking for test storage... 00:15:07.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.310 12:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:07.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.310 --rc genhtml_branch_coverage=1 00:15:07.310 --rc genhtml_function_coverage=1 00:15:07.310 --rc genhtml_legend=1 00:15:07.310 --rc geninfo_all_blocks=1 00:15:07.310 --rc geninfo_unexecuted_blocks=1 00:15:07.310 00:15:07.310 ' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:07.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.310 --rc genhtml_branch_coverage=1 00:15:07.310 --rc genhtml_function_coverage=1 00:15:07.310 --rc genhtml_legend=1 00:15:07.310 --rc geninfo_all_blocks=1 00:15:07.310 --rc geninfo_unexecuted_blocks=1 00:15:07.310 00:15:07.310 ' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:07.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.310 --rc genhtml_branch_coverage=1 00:15:07.310 --rc genhtml_function_coverage=1 00:15:07.310 --rc genhtml_legend=1 00:15:07.310 --rc geninfo_all_blocks=1 00:15:07.310 --rc geninfo_unexecuted_blocks=1 00:15:07.310 00:15:07.310 ' 00:15:07.310 12:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:07.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.310 --rc genhtml_branch_coverage=1 00:15:07.310 --rc genhtml_function_coverage=1 00:15:07.310 --rc genhtml_legend=1 00:15:07.310 --rc geninfo_all_blocks=1 00:15:07.310 --rc geninfo_unexecuted_blocks=1 00:15:07.310 00:15:07.310 ' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.310 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.311 12:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.311 12:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:07.311 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:15:13.884 
Found 0000:1a:00.0 (0x8086 - 0x159b) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:15:13.884 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:15:13.884 Found net devices under 0000:1a:00.0: cvl_0_0 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.884 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.885 12:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:15:13.885 Found net devices under 0000:1a:00.1: cvl_0_1 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:13.885 
12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:15:13.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:15:13.885 00:15:13.885 --- 10.0.0.2 ping statistics --- 00:15:13.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.885 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:15:13.885 00:15:13.885 --- 10.0.0.1 ping statistics --- 00:15:13.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.885 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.885 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=873476 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 873476 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 873476 ']' 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.885 [2024-11-20 12:29:19.055814] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:15:13.885 [2024-11-20 12:29:19.055860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.885 [2024-11-20 12:29:19.132687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:13.885 [2024-11-20 12:29:19.171680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.885 [2024-11-20 12:29:19.171713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.885 [2024-11-20 12:29:19.171720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.885 [2024-11-20 12:29:19.171726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.885 [2024-11-20 12:29:19.171730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:13.885 [2024-11-20 12:29:19.173222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.885 [2024-11-20 12:29:19.173333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.885 [2024-11-20 12:29:19.173334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.885 [2024-11-20 12:29:19.307775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.885 [2024-11-20 12:29:19.327988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.885 NULL1 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.885 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=873543 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 873543 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.886 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.145 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.145 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 873543 00:15:14.145 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.145 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.145 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.403 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.403 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 873543 00:15:14.403 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.404 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.404 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[identical kill -0 873543 / rpc_cmd polling iterations from 00:15:14.662 through 00:15:22.817 elided]
00:15:22.817 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.817
12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.396 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.396 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 873543 00:15:23.396 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.396 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.396 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.657 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.657 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 873543 00:15:23.657 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.657 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.657 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.923 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 873543 00:15:23.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (873543) - No such process 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 873543 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.923 rmmod nvme_tcp 00:15:23.923 rmmod nvme_fabrics 00:15:23.923 rmmod nvme_keyring 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 873476 ']' 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 873476 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 873476 ']' 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 873476 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 873476 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 873476' 00:15:23.923 killing process with pid 873476 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 873476 00:15:23.923 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 873476 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.182 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:26.720 00:15:26.720 real 0m19.288s 00:15:26.720 user 0m39.879s 00:15:26.720 sys 0m8.358s 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.720 ************************************ 00:15:26.720 END TEST nvmf_connect_stress 00:15:26.720 ************************************ 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.720 ************************************ 00:15:26.720 START TEST nvmf_fused_ordering 00:15:26.720 ************************************ 00:15:26.720 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:26.720 * Looking for test storage... 
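The connect_stress teardown traced above repeatedly checks the stress process with `kill -0 873543` and issues an RPC each round until the process disappears ("No such process"), then reaps it with `wait`. A minimal sketch of that polling pattern; the function name `poll_until_dead` and the background `sleep` are hypothetical stand-ins for the real script's stress process and `rpc_cmd` call:

```shell
#!/usr/bin/env bash
# Poll a PID with `kill -0` (signal 0 = existence check, no signal sent)
# until the process exits, then reap it so no zombie is left behind.
poll_until_dead() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        # connect_stress.sh issues an rpc_cmd here; we just idle briefly
        sleep 0.1
    done
    wait "$pid" 2>/dev/null   # reap the child; ignore errors if not ours
}

sleep 0.3 &        # stand-in for the stress workload
bg=$!
poll_until_dead "$bg"
echo "process $bg exited"
```

Note that `kill -0` on a PID that was never a child (or already reaped) fails the same way, which is why the real script tolerates the "No such process" message.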
00:15:26.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:26.720 12:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.720 12:29:32 
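The `cmp_versions 1.15 '<' 2` trace above splits each version on dots and compares component by component. A sketch of that dotted-decimal comparison under the same logic; the function name `version_lt` is our own (the real helper in `scripts/common.sh` is `lt`):

```shell
#!/usr/bin/env bash
# Return 0 (true) if dotted version $1 is numerically less than $2,
# comparing each dot-separated component as an integer (missing
# components are treated as 0, so 1.2 == 1.2.0).
version_lt() {
    local IFS=.
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i x y
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

As in the trace, components compare numerically, not lexically: 1.2 is greater than 1.15 would be false here, since 2 < 15 as integers.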
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.720 --rc genhtml_branch_coverage=1 00:15:26.720 --rc genhtml_function_coverage=1 00:15:26.720 --rc genhtml_legend=1 00:15:26.720 --rc geninfo_all_blocks=1 00:15:26.720 --rc geninfo_unexecuted_blocks=1 00:15:26.720 00:15:26.720 ' 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.720 --rc genhtml_branch_coverage=1 00:15:26.720 --rc genhtml_function_coverage=1 00:15:26.720 --rc genhtml_legend=1 00:15:26.720 --rc geninfo_all_blocks=1 00:15:26.720 --rc geninfo_unexecuted_blocks=1 00:15:26.720 00:15:26.720 ' 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.720 --rc genhtml_branch_coverage=1 00:15:26.720 --rc genhtml_function_coverage=1 00:15:26.720 --rc genhtml_legend=1 00:15:26.720 --rc geninfo_all_blocks=1 00:15:26.720 --rc geninfo_unexecuted_blocks=1 00:15:26.720 00:15:26.720 ' 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.720 --rc genhtml_branch_coverage=1 00:15:26.720 --rc genhtml_function_coverage=1 00:15:26.720 --rc genhtml_legend=1 00:15:26.720 --rc geninfo_all_blocks=1 00:15:26.720 --rc geninfo_unexecuted_blocks=1 00:15:26.720 00:15:26.720 ' 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.720 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:26.721 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.292 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.293 12:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:15:33.293 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.293 12:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:15:33.293 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.293 12:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:15:33.293 Found net devices under 0000:1a:00.0: cvl_0_0 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:15:33.293 Found net devices under 0000:1a:00.1: cvl_0_1 
00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:33.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:33.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:15:33.293 00:15:33.293 --- 10.0.0.2 ping statistics --- 00:15:33.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.293 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:15:33.293 00:15:33.293 --- 10.0.0.1 ping statistics --- 00:15:33.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.293 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.293 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:33.294 12:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=879127 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 879127 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 879127 ']' 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 [2024-11-20 12:29:38.393719] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:15:33.294 [2024-11-20 12:29:38.393760] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.294 [2024-11-20 12:29:38.469048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.294 [2024-11-20 12:29:38.504858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.294 [2024-11-20 12:29:38.504887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.294 [2024-11-20 12:29:38.504893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.294 [2024-11-20 12:29:38.504899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.294 [2024-11-20 12:29:38.504903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:33.294 [2024-11-20 12:29:38.505454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 [2024-11-20 12:29:38.650175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 [2024-11-20 12:29:38.674365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 NULL1 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.294 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:33.294 [2024-11-20 12:29:38.735607] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:15:33.294 [2024-11-20 12:29:38.735637] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879198 ] 00:15:33.553 Attached to nqn.2016-06.io.spdk:cnode1 00:15:33.553 Namespace ID: 1 size: 1GB 00:15:33.553 fused_ordering(0) 00:15:33.553 fused_ordering(1) 00:15:33.553 fused_ordering(2) 00:15:33.553 fused_ordering(3) 00:15:33.553 fused_ordering(4) 00:15:33.553 fused_ordering(5) 00:15:33.553 fused_ordering(6) 00:15:33.553 fused_ordering(7) 00:15:33.553 fused_ordering(8) 00:15:33.553 fused_ordering(9) 00:15:33.554 fused_ordering(10) 00:15:33.554 fused_ordering(11) 00:15:33.554 fused_ordering(12) 00:15:33.554 fused_ordering(13) 00:15:33.554 fused_ordering(14) 00:15:33.554 fused_ordering(15) 00:15:33.554 fused_ordering(16) 00:15:33.554 fused_ordering(17) 00:15:33.554 fused_ordering(18) 00:15:33.554 fused_ordering(19) 00:15:33.554 fused_ordering(20) 00:15:33.554 fused_ordering(21) 00:15:33.554 fused_ordering(22) 00:15:33.554 fused_ordering(23) 00:15:33.554 fused_ordering(24) 00:15:33.554 fused_ordering(25) 00:15:33.554 fused_ordering(26) 00:15:33.554 fused_ordering(27) 00:15:33.554 
fused_ordering(28) 00:15:33.554 fused_ordering(29) 00:15:33.554 fused_ordering(30) 00:15:33.554 fused_ordering(31) 00:15:33.554 fused_ordering(32) 00:15:33.554 fused_ordering(33) 00:15:33.554 fused_ordering(34) 00:15:33.554 fused_ordering(35) 00:15:33.554 fused_ordering(36) 00:15:33.554 fused_ordering(37) 00:15:33.554 fused_ordering(38) 00:15:33.554 fused_ordering(39) 00:15:33.554 fused_ordering(40) 00:15:33.554 fused_ordering(41) 00:15:33.554 fused_ordering(42) 00:15:33.554 fused_ordering(43) 00:15:33.554 fused_ordering(44) 00:15:33.554 fused_ordering(45) 00:15:33.554 fused_ordering(46) 00:15:33.554 fused_ordering(47) 00:15:33.554 fused_ordering(48) 00:15:33.554 fused_ordering(49) 00:15:33.554 fused_ordering(50) 00:15:33.554 fused_ordering(51) 00:15:33.554 fused_ordering(52) 00:15:33.554 fused_ordering(53) 00:15:33.554 fused_ordering(54) 00:15:33.554 fused_ordering(55) 00:15:33.554 fused_ordering(56) 00:15:33.554 fused_ordering(57) 00:15:33.554 fused_ordering(58) 00:15:33.554 fused_ordering(59) 00:15:33.554 fused_ordering(60) 00:15:33.554 fused_ordering(61) 00:15:33.554 fused_ordering(62) 00:15:33.554 fused_ordering(63) 00:15:33.554 fused_ordering(64) 00:15:33.554 fused_ordering(65) 00:15:33.554 fused_ordering(66) 00:15:33.554 fused_ordering(67) 00:15:33.554 fused_ordering(68) 00:15:33.554 fused_ordering(69) 00:15:33.554 fused_ordering(70) 00:15:33.554 fused_ordering(71) 00:15:33.554 fused_ordering(72) 00:15:33.554 fused_ordering(73) 00:15:33.554 fused_ordering(74) 00:15:33.554 fused_ordering(75) 00:15:33.554 fused_ordering(76) 00:15:33.554 fused_ordering(77) 00:15:33.554 fused_ordering(78) 00:15:33.554 fused_ordering(79) 00:15:33.554 fused_ordering(80) 00:15:33.554 fused_ordering(81) 00:15:33.554 fused_ordering(82) 00:15:33.554 fused_ordering(83) 00:15:33.554 fused_ordering(84) 00:15:33.554 fused_ordering(85) 00:15:33.554 fused_ordering(86) 00:15:33.554 fused_ordering(87) 00:15:33.554 fused_ordering(88) 00:15:33.554 fused_ordering(89) 00:15:33.554 
fused_ordering(90) 00:15:33.554 [... repetitive counter output fused_ordering(91) through fused_ordering(1022) elided; entries emitted between 00:15:33.554 and 00:15:34.644 ...] fused_ordering(1023) 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:34.644 rmmod nvme_tcp 00:15:34.644 rmmod nvme_fabrics 00:15:34.644 rmmod nvme_keyring 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 879127 ']' 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 879127 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 879127 ']' 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 879127 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.644 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 879127 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 879127' 00:15:34.903 killing process with pid 879127 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 879127 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 879127 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.903 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:37.441 00:15:37.441 real 0m10.711s 00:15:37.441 user 0m4.943s 00:15:37.441 sys 0m5.710s 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.441 ************************************ 00:15:37.441 END TEST nvmf_fused_ordering 00:15:37.441 ************************************ 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:37.441 12:29:42 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.441 ************************************ 00:15:37.441 START TEST nvmf_ns_masking 00:15:37.441 ************************************ 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:37.441 * Looking for test storage... 00:15:37.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.441 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.442 12:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.442 --rc genhtml_branch_coverage=1 00:15:37.442 --rc genhtml_function_coverage=1 00:15:37.442 --rc genhtml_legend=1 00:15:37.442 --rc geninfo_all_blocks=1 00:15:37.442 --rc geninfo_unexecuted_blocks=1 00:15:37.442 00:15:37.442 ' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.442 --rc genhtml_branch_coverage=1 00:15:37.442 --rc genhtml_function_coverage=1 00:15:37.442 --rc genhtml_legend=1 00:15:37.442 --rc geninfo_all_blocks=1 00:15:37.442 --rc geninfo_unexecuted_blocks=1 00:15:37.442 00:15:37.442 ' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.442 --rc genhtml_branch_coverage=1 00:15:37.442 --rc genhtml_function_coverage=1 00:15:37.442 --rc genhtml_legend=1 00:15:37.442 --rc geninfo_all_blocks=1 00:15:37.442 --rc geninfo_unexecuted_blocks=1 00:15:37.442 00:15:37.442 ' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.442 --rc genhtml_branch_coverage=1 00:15:37.442 --rc 
genhtml_function_coverage=1 00:15:37.442 --rc genhtml_legend=1 00:15:37.442 --rc geninfo_all_blocks=1 00:15:37.442 --rc geninfo_unexecuted_blocks=1 00:15:37.442 00:15:37.442 ' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:37.442 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ad1d0798-c866-4e59-9910-a7e9bcf0f63a 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5a53f3d4-b83e-4660-ac78-0840b1035941 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=10fd6526-f06f-4014-8c74-d24c0c90c2cd 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:37.443 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:44.015 12:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.015 12:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:15:44.015 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:15:44.015 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.015 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: 
cvl_0_0' 00:15:44.016 Found net devices under 0000:1a:00.0: cvl_0_0 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:15:44.016 Found net devices under 0000:1a:00.1: cvl_0_1 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:44.016 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:44.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:15:44.016 00:15:44.016 --- 10.0.0.2 ping statistics --- 00:15:44.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.016 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:44.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:15:44.016 00:15:44.016 --- 10.0.0.1 ping statistics --- 00:15:44.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.016 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=883233 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 883233 
00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 883233 ']' 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 [2024-11-20 12:29:49.278202] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:15:44.016 [2024-11-20 12:29:49.278247] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.016 [2024-11-20 12:29:49.355987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.016 [2024-11-20 12:29:49.393063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.016 [2024-11-20 12:29:49.393093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:44.016 [2024-11-20 12:29:49.393099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.016 [2024-11-20 12:29:49.393104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.016 [2024-11-20 12:29:49.393109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.016 [2024-11-20 12:29:49.393690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:44.016 [2024-11-20 12:29:49.687632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:44.016 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:44.276 Malloc1 00:15:44.276 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:44.534 Malloc2 00:15:44.534 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:44.534 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:44.793 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.052 [2024-11-20 12:29:50.642742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.052 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:45.052 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 10fd6526-f06f-4014-8c74-d24c0c90c2cd -a 10.0.0.2 -s 4420 -i 4 00:15:45.052 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:45.052 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:45.052 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.052 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:45.052 12:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.588 [ 0]:0x1 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.588 
12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d713f135b0b84eaba91c9a755ca11a0b 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d713f135b0b84eaba91c9a755ca11a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.588 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.588 [ 0]:0x1 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d713f135b0b84eaba91c9a755ca11a0b 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d713f135b0b84eaba91c9a755ca11a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.588 [ 1]:0x2 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01553936a4e94aef8fa334692a02ead6 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01553936a4e94aef8fa334692a02ead6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.588 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.848 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:48.107 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:48.107 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 10fd6526-f06f-4014-8c74-d24c0c90c2cd -a 10.0.0.2 -s 4420 -i 4 00:15:48.107 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:48.107 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:48.107 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.107 12:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:48.107 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:48.107 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.641 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.642 [ 0]:0x2 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.642 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01553936a4e94aef8fa334692a02ead6 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01553936a4e94aef8fa334692a02ead6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.642 [ 0]:0x1 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d713f135b0b84eaba91c9a755ca11a0b 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d713f135b0b84eaba91c9a755ca11a0b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.642 [ 1]:0x2 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01553936a4e94aef8fa334692a02ead6 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01553936a4e94aef8fa334692a02ead6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.642 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.901 [ 0]:0x2 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.901 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.161 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01553936a4e94aef8fa334692a02ead6 00:15:51.161 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01553936a4e94aef8fa334692a02ead6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.161 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:51.161 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.161 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:51.419 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:51.419 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 10fd6526-f06f-4014-8c74-d24c0c90c2cd -a 10.0.0.2 -s 4420 -i 4 00:15:51.419 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:51.419 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:51.419 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.419 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:51.419 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:51.419 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.961 [ 0]:0x1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:53.961 12:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d713f135b0b84eaba91c9a755ca11a0b 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d713f135b0b84eaba91c9a755ca11a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:53.961 [ 1]:0x2 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01553936a4e94aef8fa334692a02ead6 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01553936a4e94aef8fa334692a02ead6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:53.961 
12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:53.961 [ 0]:0x2 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01553936a4e94aef8fa334692a02ead6 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01553936a4e94aef8fa334692a02ead6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.961 12:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:53.961 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:54.220 [2024-11-20 12:29:59.776094] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:54.220 request: 00:15:54.220 { 00:15:54.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.220 "nsid": 2, 00:15:54.220 "host": "nqn.2016-06.io.spdk:host1", 00:15:54.220 "method": "nvmf_ns_remove_host", 00:15:54.220 "req_id": 1 00:15:54.220 } 00:15:54.220 Got JSON-RPC error response 00:15:54.220 response: 00:15:54.220 { 00:15:54.221 "code": -32602, 00:15:54.221 "message": "Invalid parameters" 00:15:54.221 } 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:54.221 12:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:54.221 [ 0]:0x2 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01553936a4e94aef8fa334692a02ead6 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01553936a4e94aef8fa334692a02ead6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:54.221 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=885260 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 885260 
/var/tmp/host.sock 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 885260 ']' 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:54.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.480 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:54.480 [2024-11-20 12:30:00.142378] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:15:54.480 [2024-11-20 12:30:00.142431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885260 ] 00:15:54.480 [2024-11-20 12:30:00.214616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.738 [2024-11-20 12:30:00.254873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.307 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.307 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:55.307 12:30:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.565 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:55.823 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ad1d0798-c866-4e59-9910-a7e9bcf0f63a 00:15:55.823 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:55.823 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AD1D0798C8664E599910A7E9BCF0F63A -i 00:15:55.823 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5a53f3d4-b83e-4660-ac78-0840b1035941 00:15:55.823 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:55.823 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5A53F3D4B83E4660AC780840B1035941 -i 00:15:56.082 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:56.340 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:56.340 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:56.340 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:56.599 nvme0n1 00:15:56.599 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:56.599 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:57.167 nvme1n2 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:57.167 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:57.426 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ad1d0798-c866-4e59-9910-a7e9bcf0f63a == \a\d\1\d\0\7\9\8\-\c\8\6\6\-\4\e\5\9\-\9\9\1\0\-\a\7\e\9\b\c\f\0\f\6\3\a ]] 00:15:57.426 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:57.426 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:57.426 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:57.684 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5a53f3d4-b83e-4660-ac78-0840b1035941 == \5\a\5\3\f\3\d\4\-\b\8\3\e\-\4\6\6\0\-\a\c\7\8\-\0\8\4\0\b\1\0\3\5\9\4\1 ]] 00:15:57.685 12:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.685 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid ad1d0798-c866-4e59-9910-a7e9bcf0f63a 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AD1D0798C8664E599910A7E9BCF0F63A 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AD1D0798C8664E599910A7E9BCF0F63A 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:57.944 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AD1D0798C8664E599910A7E9BCF0F63A 00:15:58.203 [2024-11-20 12:30:03.730929] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:58.203 [2024-11-20 12:30:03.730961] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:58.203 [2024-11-20 12:30:03.730969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.203 request: 00:15:58.203 { 00:15:58.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.203 "namespace": { 00:15:58.203 "bdev_name": "invalid", 00:15:58.203 "nsid": 1, 00:15:58.203 "nguid": "AD1D0798C8664E599910A7E9BCF0F63A", 00:15:58.203 "no_auto_visible": false 00:15:58.203 }, 00:15:58.203 "method": "nvmf_subsystem_add_ns", 00:15:58.203 "req_id": 1 00:15:58.203 } 00:15:58.203 Got JSON-RPC error response 00:15:58.203 response: 00:15:58.203 { 00:15:58.203 "code": -32602, 00:15:58.203 "message": "Invalid parameters" 00:15:58.203 } 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid ad1d0798-c866-4e59-9910-a7e9bcf0f63a 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AD1D0798C8664E599910A7E9BCF0F63A -i 00:15:58.203 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:00.736 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:00.736 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:00.736 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 885260 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 885260 ']' 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 885260 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885260 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885260' 00:16:00.736 killing process with pid 885260 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 885260 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 885260 00:16:00.736 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.995 rmmod nvme_tcp 00:16:00.995 rmmod 
nvme_fabrics 00:16:00.995 rmmod nvme_keyring 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 883233 ']' 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 883233 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 883233 ']' 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 883233 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.995 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883233 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883233' 00:16:01.255 killing process with pid 883233 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 883233 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 883233 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:01.255 12:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.255 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:03.796 00:16:03.796 real 0m26.282s 00:16:03.796 user 0m31.103s 00:16:03.796 sys 0m7.258s 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:03.796 ************************************ 00:16:03.796 END TEST nvmf_ns_masking 00:16:03.796 ************************************ 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:03.796 ************************************ 00:16:03.796 START TEST nvmf_nvme_cli 00:16:03.796 ************************************ 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:03.796 * Looking for test storage... 00:16:03.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.796 12:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.796 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:03.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.797 --rc genhtml_branch_coverage=1 00:16:03.797 --rc genhtml_function_coverage=1 00:16:03.797 --rc genhtml_legend=1 00:16:03.797 --rc geninfo_all_blocks=1 00:16:03.797 --rc geninfo_unexecuted_blocks=1 00:16:03.797 
00:16:03.797 ' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:03.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.797 --rc genhtml_branch_coverage=1 00:16:03.797 --rc genhtml_function_coverage=1 00:16:03.797 --rc genhtml_legend=1 00:16:03.797 --rc geninfo_all_blocks=1 00:16:03.797 --rc geninfo_unexecuted_blocks=1 00:16:03.797 00:16:03.797 ' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:03.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.797 --rc genhtml_branch_coverage=1 00:16:03.797 --rc genhtml_function_coverage=1 00:16:03.797 --rc genhtml_legend=1 00:16:03.797 --rc geninfo_all_blocks=1 00:16:03.797 --rc geninfo_unexecuted_blocks=1 00:16:03.797 00:16:03.797 ' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:03.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.797 --rc genhtml_branch_coverage=1 00:16:03.797 --rc genhtml_function_coverage=1 00:16:03.797 --rc genhtml_legend=1 00:16:03.797 --rc geninfo_all_blocks=1 00:16:03.797 --rc geninfo_unexecuted_blocks=1 00:16:03.797 00:16:03.797 ' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.797 12:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:03.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:03.797 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:10.370 12:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:16:10.370 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:16:10.370 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.370 12:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:10.370 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:16:10.371 Found net devices under 0000:1a:00.0: cvl_0_0 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:16:10.371 Found net devices under 0000:1a:00.1: cvl_0_1 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.371 12:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:10.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:16:10.371 00:16:10.371 --- 10.0.0.2 ping statistics --- 00:16:10.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.371 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:16:10.371 00:16:10.371 --- 10.0.0.1 ping statistics --- 00:16:10.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.371 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:10.371 12:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=890666 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 890666 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 890666 ']' 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.371 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 [2024-11-20 12:30:15.553304] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:16:10.371 [2024-11-20 12:30:15.553353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.371 [2024-11-20 12:30:15.634024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.371 [2024-11-20 12:30:15.676088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.371 [2024-11-20 12:30:15.676122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.371 [2024-11-20 12:30:15.676129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.371 [2024-11-20 12:30:15.676134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.371 [2024-11-20 12:30:15.676139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:10.371 [2024-11-20 12:30:15.677804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.371 [2024-11-20 12:30:15.677920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.371 [2024-11-20 12:30:15.677968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.371 [2024-11-20 12:30:15.677968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.631 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.631 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:10.631 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.631 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.631 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 [2024-11-20 12:30:16.419337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 Malloc0 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 Malloc1 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 [2024-11-20 12:30:16.513368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.890 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.891 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:10.891 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.891 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.891 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.891 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:11.150 00:16:11.150 Discovery Log Number of Records 2, Generation counter 2 00:16:11.150 =====Discovery Log Entry 0====== 00:16:11.150 trtype: tcp 00:16:11.150 adrfam: ipv4 00:16:11.150 subtype: current discovery subsystem 00:16:11.150 treq: not required 00:16:11.150 portid: 0 00:16:11.150 trsvcid: 4420 
00:16:11.150 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:11.150 traddr: 10.0.0.2 00:16:11.150 eflags: explicit discovery connections, duplicate discovery information 00:16:11.150 sectype: none 00:16:11.150 =====Discovery Log Entry 1====== 00:16:11.150 trtype: tcp 00:16:11.150 adrfam: ipv4 00:16:11.150 subtype: nvme subsystem 00:16:11.150 treq: not required 00:16:11.150 portid: 0 00:16:11.150 trsvcid: 4420 00:16:11.150 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:11.150 traddr: 10.0.0.2 00:16:11.150 eflags: none 00:16:11.150 sectype: none 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:11.150 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.529 12:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:12.529 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.529 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.529 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:12.529 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:12.529 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.434 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:14.693 
12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:14.693 /dev/nvme0n2 ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:14.693 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:14.694 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:14.694 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:14.694 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.952 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.952 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:14.953 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:14.953 rmmod nvme_tcp 00:16:15.213 rmmod nvme_fabrics 00:16:15.213 rmmod nvme_keyring 00:16:15.213 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:15.213 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 890666 ']' 
00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 890666 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 890666 ']' 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 890666 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 890666 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 890666' 00:16:15.214 killing process with pid 890666 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 890666 00:16:15.214 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 890666 00:16:15.472 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:15.472 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:15.472 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:15.472 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:15.472 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:15.473 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 
00:16:15.473 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:15.473 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:15.473 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:15.473 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.473 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.473 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.372 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:17.372 00:16:17.372 real 0m14.002s 00:16:17.372 user 0m23.216s 00:16:17.372 sys 0m5.335s 00:16:17.372 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.372 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:17.372 ************************************ 00:16:17.372 END TEST nvmf_nvme_cli 00:16:17.372 ************************************ 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.630 ************************************ 00:16:17.630 START TEST 
nvmf_vfio_user 00:16:17.630 ************************************ 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:17.630 * Looking for test storage... 00:16:17.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.630 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.631 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:17.631 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.631 --rc genhtml_branch_coverage=1 00:16:17.631 --rc genhtml_function_coverage=1 00:16:17.631 --rc genhtml_legend=1 00:16:17.631 --rc geninfo_all_blocks=1 00:16:17.631 --rc geninfo_unexecuted_blocks=1 00:16:17.631 00:16:17.631 ' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.631 --rc genhtml_branch_coverage=1 00:16:17.631 --rc genhtml_function_coverage=1 00:16:17.631 --rc genhtml_legend=1 00:16:17.631 --rc geninfo_all_blocks=1 00:16:17.631 --rc geninfo_unexecuted_blocks=1 00:16:17.631 00:16:17.631 ' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.631 --rc genhtml_branch_coverage=1 00:16:17.631 --rc genhtml_function_coverage=1 00:16:17.631 --rc genhtml_legend=1 00:16:17.631 --rc geninfo_all_blocks=1 00:16:17.631 --rc geninfo_unexecuted_blocks=1 00:16:17.631 00:16:17.631 ' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:17.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.631 --rc genhtml_branch_coverage=1 00:16:17.631 --rc genhtml_function_coverage=1 00:16:17.631 --rc genhtml_legend=1 00:16:17.631 --rc geninfo_all_blocks=1 00:16:17.631 --rc geninfo_unexecuted_blocks=1 00:16:17.631 00:16:17.631 ' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.631 
12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:17.631 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:17.631 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=892336 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 892336' 00:16:17.890 Process pid: 892336 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 892336 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
892336 ']' 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:17.890 [2024-11-20 12:30:23.439804] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:16:17.890 [2024-11-20 12:30:23.439850] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.890 [2024-11-20 12:30:23.512361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.890 [2024-11-20 12:30:23.551439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.890 [2024-11-20 12:30:23.551476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.890 [2024-11-20 12:30:23.551482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.890 [2024-11-20 12:30:23.551487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.890 [2024-11-20 12:30:23.551491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:17.890 [2024-11-20 12:30:23.553093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.890 [2024-11-20 12:30:23.553218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.890 [2024-11-20 12:30:23.553334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.890 [2024-11-20 12:30:23.553335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:17.890 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:19.267 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:19.267 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:19.267 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:19.267 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:19.267 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:19.267 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:19.526 Malloc1 00:16:19.526 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:19.526 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:19.820 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:20.107 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:20.107 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:20.107 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:20.107 Malloc2 00:16:20.107 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:20.415 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:20.674 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:20.674 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:20.674 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:20.674 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:20.674 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:20.674 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:20.674 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:20.674 [2024-11-20 12:30:26.400711] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:16:20.674 [2024-11-20 12:30:26.400746] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892890 ] 00:16:20.935 [2024-11-20 12:30:26.439672] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:20.935 [2024-11-20 12:30:26.441943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:20.935 [2024-11-20 12:30:26.441962] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff78afc5000 00:16:20.935 [2024-11-20 12:30:26.442949] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.443947] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.444956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.445965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.446960] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.447974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.448974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.449985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:20.935 [2024-11-20 12:30:26.450994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:20.935 [2024-11-20 12:30:26.451003] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff78afba000 00:16:20.935 [2024-11-20 12:30:26.451847] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:20.935 [2024-11-20 12:30:26.463845] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:20.935 [2024-11-20 12:30:26.463870] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:20.935 [2024-11-20 12:30:26.466085] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:20.935 [2024-11-20 12:30:26.466117] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:20.935 [2024-11-20 12:30:26.466181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:20.935 [2024-11-20 12:30:26.466196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:20.935 [2024-11-20 12:30:26.466200] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:20.935 [2024-11-20 12:30:26.467080] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:20.935 [2024-11-20 12:30:26.467088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:20.935 [2024-11-20 12:30:26.467093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:20.935 [2024-11-20 12:30:26.468085] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:20.935 [2024-11-20 12:30:26.468092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:20.935 [2024-11-20 12:30:26.468099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:20.935 [2024-11-20 12:30:26.469092] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:20.935 [2024-11-20 12:30:26.469099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:20.935 [2024-11-20 12:30:26.470098] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:20.935 [2024-11-20 12:30:26.470105] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:20.935 [2024-11-20 12:30:26.470109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:20.935 [2024-11-20 12:30:26.470115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:20.935 [2024-11-20 12:30:26.470223] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:20.935 [2024-11-20 12:30:26.470227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:20.935 [2024-11-20 12:30:26.470231] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:20.935 [2024-11-20 12:30:26.471104] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:20.935 [2024-11-20 12:30:26.472107] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:20.935 [2024-11-20 12:30:26.473111] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:20.935 [2024-11-20 12:30:26.474113] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:20.935 [2024-11-20 12:30:26.474169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:20.935 [2024-11-20 12:30:26.477417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:20.936 [2024-11-20 12:30:26.477424] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:20.936 [2024-11-20 12:30:26.477428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477444] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:20.936 [2024-11-20 12:30:26.477450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477465] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:20.936 [2024-11-20 12:30:26.477469] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:20.936 [2024-11-20 12:30:26.477473] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:20.936 [2024-11-20 12:30:26.477485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477535] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:20.936 [2024-11-20 12:30:26.477539] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:20.936 [2024-11-20 12:30:26.477542] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:20.936 [2024-11-20 12:30:26.477546] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:20.936 [2024-11-20 12:30:26.477552] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:20.936 [2024-11-20 12:30:26.477556] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:20.936 [2024-11-20 12:30:26.477560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.936 [2024-11-20 12:30:26.477606] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.936 [2024-11-20 12:30:26.477613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.936 [2024-11-20 12:30:26.477620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.936 [2024-11-20 12:30:26.477623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477654] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:20.936 [2024-11-20 12:30:26.477659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477742] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:20.936 [2024-11-20 12:30:26.477746] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:20.936 [2024-11-20 12:30:26.477749] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:20.936 [2024-11-20 12:30:26.477754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477780] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:20.936 [2024-11-20 12:30:26.477787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477801] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:20.936 [2024-11-20 12:30:26.477804] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:20.936 [2024-11-20 12:30:26.477807] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:20.936 [2024-11-20 12:30:26.477812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477849] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:20.936 [2024-11-20 12:30:26.477853] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:20.936 [2024-11-20 12:30:26.477855] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:20.936 [2024-11-20 12:30:26.477860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:16:20.936 [2024-11-20 12:30:26.477879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477907] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:20.936 [2024-11-20 12:30:26.477911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:20.936 [2024-11-20 12:30:26.477915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:20.936 [2024-11-20 12:30:26.477931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477948] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.477986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:20.936 [2024-11-20 12:30:26.477997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:20.936 [2024-11-20 12:30:26.478007] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:20.936 [2024-11-20 12:30:26.478011] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:20.936 [2024-11-20 12:30:26.478014] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:20.936 [2024-11-20 12:30:26.478017] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:20.937 [2024-11-20 12:30:26.478019] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:20.937 [2024-11-20 12:30:26.478025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:20.937 [2024-11-20 12:30:26.478030] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:20.937 [2024-11-20 12:30:26.478034] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:20.937 [2024-11-20 12:30:26.478037] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:20.937 [2024-11-20 12:30:26.478042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:20.937 [2024-11-20 12:30:26.478047] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:20.937 [2024-11-20 12:30:26.478051] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:20.937 [2024-11-20 12:30:26.478054] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:20.937 [2024-11-20 12:30:26.478059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:20.937 [2024-11-20 12:30:26.478065] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:20.937 [2024-11-20 12:30:26.478068] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:20.937 [2024-11-20 12:30:26.478071] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:20.937 [2024-11-20 12:30:26.478076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:20.937 [2024-11-20 12:30:26.478082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:20.937 [2024-11-20 
12:30:26.478092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:20.937 [2024-11-20 12:30:26.478101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:20.937 [2024-11-20 12:30:26.478106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:20.937 ===================================================== 00:16:20.937 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:20.937 ===================================================== 00:16:20.937 Controller Capabilities/Features 00:16:20.937 ================================ 00:16:20.937 Vendor ID: 4e58 00:16:20.937 Subsystem Vendor ID: 4e58 00:16:20.937 Serial Number: SPDK1 00:16:20.937 Model Number: SPDK bdev Controller 00:16:20.937 Firmware Version: 25.01 00:16:20.937 Recommended Arb Burst: 6 00:16:20.937 IEEE OUI Identifier: 8d 6b 50 00:16:20.937 Multi-path I/O 00:16:20.937 May have multiple subsystem ports: Yes 00:16:20.937 May have multiple controllers: Yes 00:16:20.937 Associated with SR-IOV VF: No 00:16:20.937 Max Data Transfer Size: 131072 00:16:20.937 Max Number of Namespaces: 32 00:16:20.937 Max Number of I/O Queues: 127 00:16:20.937 NVMe Specification Version (VS): 1.3 00:16:20.937 NVMe Specification Version (Identify): 1.3 00:16:20.937 Maximum Queue Entries: 256 00:16:20.937 Contiguous Queues Required: Yes 00:16:20.937 Arbitration Mechanisms Supported 00:16:20.937 Weighted Round Robin: Not Supported 00:16:20.937 Vendor Specific: Not Supported 00:16:20.937 Reset Timeout: 15000 ms 00:16:20.937 Doorbell Stride: 4 bytes 00:16:20.937 NVM Subsystem Reset: Not Supported 00:16:20.937 Command Sets Supported 00:16:20.937 NVM Command Set: Supported 00:16:20.937 Boot Partition: Not Supported 00:16:20.937 Memory Page Size Minimum: 4096 bytes 00:16:20.937 
Memory Page Size Maximum: 4096 bytes 00:16:20.937 Persistent Memory Region: Not Supported 00:16:20.937 Optional Asynchronous Events Supported 00:16:20.937 Namespace Attribute Notices: Supported 00:16:20.937 Firmware Activation Notices: Not Supported 00:16:20.937 ANA Change Notices: Not Supported 00:16:20.937 PLE Aggregate Log Change Notices: Not Supported 00:16:20.937 LBA Status Info Alert Notices: Not Supported 00:16:20.937 EGE Aggregate Log Change Notices: Not Supported 00:16:20.937 Normal NVM Subsystem Shutdown event: Not Supported 00:16:20.937 Zone Descriptor Change Notices: Not Supported 00:16:20.937 Discovery Log Change Notices: Not Supported 00:16:20.937 Controller Attributes 00:16:20.937 128-bit Host Identifier: Supported 00:16:20.937 Non-Operational Permissive Mode: Not Supported 00:16:20.937 NVM Sets: Not Supported 00:16:20.937 Read Recovery Levels: Not Supported 00:16:20.937 Endurance Groups: Not Supported 00:16:20.937 Predictable Latency Mode: Not Supported 00:16:20.937 Traffic Based Keep ALive: Not Supported 00:16:20.937 Namespace Granularity: Not Supported 00:16:20.937 SQ Associations: Not Supported 00:16:20.937 UUID List: Not Supported 00:16:20.937 Multi-Domain Subsystem: Not Supported 00:16:20.937 Fixed Capacity Management: Not Supported 00:16:20.937 Variable Capacity Management: Not Supported 00:16:20.937 Delete Endurance Group: Not Supported 00:16:20.937 Delete NVM Set: Not Supported 00:16:20.937 Extended LBA Formats Supported: Not Supported 00:16:20.937 Flexible Data Placement Supported: Not Supported 00:16:20.937 00:16:20.937 Controller Memory Buffer Support 00:16:20.937 ================================ 00:16:20.937 Supported: No 00:16:20.937 00:16:20.937 Persistent Memory Region Support 00:16:20.937 ================================ 00:16:20.937 Supported: No 00:16:20.937 00:16:20.937 Admin Command Set Attributes 00:16:20.937 ============================ 00:16:20.937 Security Send/Receive: Not Supported 00:16:20.937 Format NVM: Not Supported 
00:16:20.937 Firmware Activate/Download: Not Supported 00:16:20.937 Namespace Management: Not Supported 00:16:20.937 Device Self-Test: Not Supported 00:16:20.937 Directives: Not Supported 00:16:20.937 NVMe-MI: Not Supported 00:16:20.937 Virtualization Management: Not Supported 00:16:20.937 Doorbell Buffer Config: Not Supported 00:16:20.937 Get LBA Status Capability: Not Supported 00:16:20.937 Command & Feature Lockdown Capability: Not Supported 00:16:20.937 Abort Command Limit: 4 00:16:20.937 Async Event Request Limit: 4 00:16:20.937 Number of Firmware Slots: N/A 00:16:20.937 Firmware Slot 1 Read-Only: N/A 00:16:20.937 Firmware Activation Without Reset: N/A 00:16:20.937 Multiple Update Detection Support: N/A 00:16:20.937 Firmware Update Granularity: No Information Provided 00:16:20.937 Per-Namespace SMART Log: No 00:16:20.937 Asymmetric Namespace Access Log Page: Not Supported 00:16:20.937 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:20.937 Command Effects Log Page: Supported 00:16:20.937 Get Log Page Extended Data: Supported 00:16:20.937 Telemetry Log Pages: Not Supported 00:16:20.937 Persistent Event Log Pages: Not Supported 00:16:20.937 Supported Log Pages Log Page: May Support 00:16:20.937 Commands Supported & Effects Log Page: Not Supported 00:16:20.937 Feature Identifiers & Effects Log Page:May Support 00:16:20.937 NVMe-MI Commands & Effects Log Page: May Support 00:16:20.937 Data Area 4 for Telemetry Log: Not Supported 00:16:20.937 Error Log Page Entries Supported: 128 00:16:20.937 Keep Alive: Supported 00:16:20.937 Keep Alive Granularity: 10000 ms 00:16:20.937 00:16:20.937 NVM Command Set Attributes 00:16:20.937 ========================== 00:16:20.937 Submission Queue Entry Size 00:16:20.937 Max: 64 00:16:20.937 Min: 64 00:16:20.937 Completion Queue Entry Size 00:16:20.937 Max: 16 00:16:20.937 Min: 16 00:16:20.937 Number of Namespaces: 32 00:16:20.937 Compare Command: Supported 00:16:20.937 Write Uncorrectable Command: Not Supported 00:16:20.937 Dataset 
Management Command: Supported 00:16:20.937 Write Zeroes Command: Supported 00:16:20.937 Set Features Save Field: Not Supported 00:16:20.937 Reservations: Not Supported 00:16:20.937 Timestamp: Not Supported 00:16:20.937 Copy: Supported 00:16:20.937 Volatile Write Cache: Present 00:16:20.937 Atomic Write Unit (Normal): 1 00:16:20.937 Atomic Write Unit (PFail): 1 00:16:20.937 Atomic Compare & Write Unit: 1 00:16:20.937 Fused Compare & Write: Supported 00:16:20.937 Scatter-Gather List 00:16:20.937 SGL Command Set: Supported (Dword aligned) 00:16:20.937 SGL Keyed: Not Supported 00:16:20.937 SGL Bit Bucket Descriptor: Not Supported 00:16:20.937 SGL Metadata Pointer: Not Supported 00:16:20.937 Oversized SGL: Not Supported 00:16:20.937 SGL Metadata Address: Not Supported 00:16:20.937 SGL Offset: Not Supported 00:16:20.937 Transport SGL Data Block: Not Supported 00:16:20.937 Replay Protected Memory Block: Not Supported 00:16:20.937 00:16:20.937 Firmware Slot Information 00:16:20.937 ========================= 00:16:20.937 Active slot: 1 00:16:20.937 Slot 1 Firmware Revision: 25.01 00:16:20.937 00:16:20.937 00:16:20.937 Commands Supported and Effects 00:16:20.937 ============================== 00:16:20.937 Admin Commands 00:16:20.937 -------------- 00:16:20.937 Get Log Page (02h): Supported 00:16:20.937 Identify (06h): Supported 00:16:20.937 Abort (08h): Supported 00:16:20.937 Set Features (09h): Supported 00:16:20.937 Get Features (0Ah): Supported 00:16:20.938 Asynchronous Event Request (0Ch): Supported 00:16:20.938 Keep Alive (18h): Supported 00:16:20.938 I/O Commands 00:16:20.938 ------------ 00:16:20.938 Flush (00h): Supported LBA-Change 00:16:20.938 Write (01h): Supported LBA-Change 00:16:20.938 Read (02h): Supported 00:16:20.938 Compare (05h): Supported 00:16:20.938 Write Zeroes (08h): Supported LBA-Change 00:16:20.938 Dataset Management (09h): Supported LBA-Change 00:16:20.938 Copy (19h): Supported LBA-Change 00:16:20.938 00:16:20.938 Error Log 00:16:20.938 ========= 
00:16:20.938 00:16:20.938 Arbitration 00:16:20.938 =========== 00:16:20.938 Arbitration Burst: 1 00:16:20.938 00:16:20.938 Power Management 00:16:20.938 ================ 00:16:20.938 Number of Power States: 1 00:16:20.938 Current Power State: Power State #0 00:16:20.938 Power State #0: 00:16:20.938 Max Power: 0.00 W 00:16:20.938 Non-Operational State: Operational 00:16:20.938 Entry Latency: Not Reported 00:16:20.938 Exit Latency: Not Reported 00:16:20.938 Relative Read Throughput: 0 00:16:20.938 Relative Read Latency: 0 00:16:20.938 Relative Write Throughput: 0 00:16:20.938 Relative Write Latency: 0 00:16:20.938 Idle Power: Not Reported 00:16:20.938 Active Power: Not Reported 00:16:20.938 Non-Operational Permissive Mode: Not Supported 00:16:20.938 00:16:20.938 Health Information 00:16:20.938 ================== 00:16:20.938 Critical Warnings: 00:16:20.938 Available Spare Space: OK 00:16:20.938 Temperature: OK 00:16:20.938 Device Reliability: OK 00:16:20.938 Read Only: No 00:16:20.938 Volatile Memory Backup: OK 00:16:20.938 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:20.938 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:20.938 Available Spare: 0% 00:16:20.938 Available Sp[2024-11-20 12:30:26.478185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:20.938 [2024-11-20 12:30:26.478191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:20.938 [2024-11-20 12:30:26.478214] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:20.938 [2024-11-20 12:30:26.478223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.938 [2024-11-20 12:30:26.478228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.938 [2024-11-20 12:30:26.478233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.938 [2024-11-20 12:30:26.478238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.938 [2024-11-20 12:30:26.479143] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:20.938 [2024-11-20 12:30:26.479152] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:20.938 [2024-11-20 12:30:26.480148] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:20.938 [2024-11-20 12:30:26.480194] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:20.938 [2024-11-20 12:30:26.480200] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:20.938 [2024-11-20 12:30:26.481156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:20.938 [2024-11-20 12:30:26.481165] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:20.938 [2024-11-20 12:30:26.481211] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:20.938 [2024-11-20 12:30:26.482179] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:20.938 are Threshold: 0% 00:16:20.938 Life Percentage Used: 0% 00:16:20.938 Data Units Read: 0 00:16:20.938 Data 
Units Written: 0 00:16:20.938 Host Read Commands: 0 00:16:20.938 Host Write Commands: 0 00:16:20.938 Controller Busy Time: 0 minutes 00:16:20.938 Power Cycles: 0 00:16:20.938 Power On Hours: 0 hours 00:16:20.938 Unsafe Shutdowns: 0 00:16:20.938 Unrecoverable Media Errors: 0 00:16:20.938 Lifetime Error Log Entries: 0 00:16:20.938 Warning Temperature Time: 0 minutes 00:16:20.938 Critical Temperature Time: 0 minutes 00:16:20.938 00:16:20.938 Number of Queues 00:16:20.938 ================ 00:16:20.938 Number of I/O Submission Queues: 127 00:16:20.938 Number of I/O Completion Queues: 127 00:16:20.938 00:16:20.938 Active Namespaces 00:16:20.938 ================= 00:16:20.938 Namespace ID:1 00:16:20.938 Error Recovery Timeout: Unlimited 00:16:20.938 Command Set Identifier: NVM (00h) 00:16:20.938 Deallocate: Supported 00:16:20.938 Deallocated/Unwritten Error: Not Supported 00:16:20.938 Deallocated Read Value: Unknown 00:16:20.938 Deallocate in Write Zeroes: Not Supported 00:16:20.938 Deallocated Guard Field: 0xFFFF 00:16:20.938 Flush: Supported 00:16:20.938 Reservation: Supported 00:16:20.938 Namespace Sharing Capabilities: Multiple Controllers 00:16:20.938 Size (in LBAs): 131072 (0GiB) 00:16:20.938 Capacity (in LBAs): 131072 (0GiB) 00:16:20.938 Utilization (in LBAs): 131072 (0GiB) 00:16:20.938 NGUID: 8F742AB443CD46758A41146594EFBB42 00:16:20.938 UUID: 8f742ab4-43cd-4675-8a41-146594efbb42 00:16:20.938 Thin Provisioning: Not Supported 00:16:20.938 Per-NS Atomic Units: Yes 00:16:20.938 Atomic Boundary Size (Normal): 0 00:16:20.938 Atomic Boundary Size (PFail): 0 00:16:20.938 Atomic Boundary Offset: 0 00:16:20.938 Maximum Single Source Range Length: 65535 00:16:20.938 Maximum Copy Length: 65535 00:16:20.938 Maximum Source Range Count: 1 00:16:20.938 NGUID/EUI64 Never Reused: No 00:16:20.938 Namespace Write Protected: No 00:16:20.938 Number of LBA Formats: 1 00:16:20.938 Current LBA Format: LBA Format #00 00:16:20.938 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:16:20.938 00:16:20.938 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:21.197 [2024-11-20 12:30:26.699242] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:26.471 Initializing NVMe Controllers 00:16:26.471 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:26.471 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:26.471 Initialization complete. Launching workers. 00:16:26.471 ======================================================== 00:16:26.471 Latency(us) 00:16:26.471 Device Information : IOPS MiB/s Average min max 00:16:26.471 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39922.71 155.95 3206.02 871.60 9729.68 00:16:26.471 ======================================================== 00:16:26.471 Total : 39922.71 155.95 3206.02 871.60 9729.68 00:16:26.471 00:16:26.471 [2024-11-20 12:30:31.717513] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:26.471 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:26.471 [2024-11-20 12:30:31.938514] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:31.744 Initializing NVMe Controllers 00:16:31.744 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:16:31.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:31.744 Initialization complete. Launching workers. 00:16:31.744 ======================================================== 00:16:31.744 Latency(us) 00:16:31.744 Device Information : IOPS MiB/s Average min max 00:16:31.744 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7971.87 5986.29 8978.18 00:16:31.744 ======================================================== 00:16:31.744 Total : 16076.80 62.80 7971.87 5986.29 8978.18 00:16:31.744 00:16:31.744 [2024-11-20 12:30:36.977107] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:31.744 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:31.744 [2024-11-20 12:30:37.177031] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:37.041 [2024-11-20 12:30:42.275883] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:37.041 Initializing NVMe Controllers 00:16:37.041 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:37.041 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:37.041 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:37.041 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:37.041 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:37.041 Initialization complete. Launching workers. 
00:16:37.041 Starting thread on core 2 00:16:37.041 Starting thread on core 3 00:16:37.041 Starting thread on core 1 00:16:37.041 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:37.041 [2024-11-20 12:30:42.546563] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:40.341 [2024-11-20 12:30:45.606663] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:40.341 Initializing NVMe Controllers 00:16:40.341 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:40.341 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:40.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:40.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:40.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:40.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:40.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:40.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:40.341 Initialization complete. Launching workers. 
00:16:40.341 Starting thread on core 1 with urgent priority queue 00:16:40.341 Starting thread on core 2 with urgent priority queue 00:16:40.341 Starting thread on core 3 with urgent priority queue 00:16:40.341 Starting thread on core 0 with urgent priority queue 00:16:40.341 SPDK bdev Controller (SPDK1 ) core 0: 9447.00 IO/s 10.59 secs/100000 ios 00:16:40.341 SPDK bdev Controller (SPDK1 ) core 1: 9021.33 IO/s 11.08 secs/100000 ios 00:16:40.341 SPDK bdev Controller (SPDK1 ) core 2: 9020.33 IO/s 11.09 secs/100000 ios 00:16:40.341 SPDK bdev Controller (SPDK1 ) core 3: 8322.33 IO/s 12.02 secs/100000 ios 00:16:40.341 ======================================================== 00:16:40.341 00:16:40.341 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:40.341 [2024-11-20 12:30:45.875871] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:40.341 Initializing NVMe Controllers 00:16:40.341 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:40.341 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:40.341 Namespace ID: 1 size: 0GB 00:16:40.341 Initialization complete. 00:16:40.341 INFO: using host memory buffer for IO 00:16:40.341 Hello world! 
00:16:40.341 [2024-11-20 12:30:45.912094] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:40.341 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:40.600 [2024-11-20 12:30:46.171973] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:41.537 Initializing NVMe Controllers 00:16:41.537 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.537 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.537 Initialization complete. Launching workers. 00:16:41.537 submit (in ns) avg, min, max = 5424.0, 2921.8, 3999093.6 00:16:41.537 complete (in ns) avg, min, max = 18651.0, 1590.9, 5992467.3 00:16:41.537 00:16:41.537 Submit histogram 00:16:41.537 ================ 00:16:41.537 Range in us Cumulative Count 00:16:41.538 2.909 - 2.924: 0.0056% ( 1) 00:16:41.538 2.924 - 2.938: 0.0448% ( 7) 00:16:41.538 2.938 - 2.953: 0.0952% ( 9) 00:16:41.538 2.953 - 2.967: 0.1736% ( 14) 00:16:41.538 2.967 - 2.982: 0.5209% ( 62) 00:16:41.538 2.982 - 2.996: 1.6522% ( 202) 00:16:41.538 2.996 - 3.011: 4.3797% ( 487) 00:16:41.538 3.011 - 3.025: 8.5242% ( 740) 00:16:41.538 3.025 - 3.040: 13.7665% ( 936) 00:16:41.538 3.040 - 3.055: 20.2240% ( 1153) 00:16:41.538 3.055 - 3.069: 26.3456% ( 1093) 00:16:41.538 3.069 - 3.084: 30.6973% ( 777) 00:16:41.538 3.084 - 3.098: 33.5256% ( 505) 00:16:41.538 3.098 - 3.113: 35.8107% ( 408) 00:16:41.538 3.113 - 3.127: 38.5606% ( 491) 00:16:41.538 3.127 - 3.142: 41.1817% ( 468) 00:16:41.538 3.142 - 3.156: 43.6908% ( 448) 00:16:41.538 3.156 - 3.171: 46.4296% ( 489) 00:16:41.538 3.171 - 3.185: 50.6861% ( 760) 00:16:41.538 3.185 - 3.200: 57.9838% ( 1303) 00:16:41.538 3.200 - 3.215: 65.2254% ( 1293) 
00:16:41.538 3.215 - 3.229: 71.6438% ( 1146) 00:16:41.538 3.229 - 3.244: 76.8804% ( 935) 00:16:41.538 3.244 - 3.258: 81.5514% ( 834) 00:16:41.538 3.258 - 3.273: 84.7382% ( 569) 00:16:41.538 3.273 - 3.287: 86.6368% ( 339) 00:16:41.538 3.287 - 3.302: 87.5049% ( 155) 00:16:41.538 3.302 - 3.316: 87.9810% ( 85) 00:16:41.538 3.316 - 3.331: 88.3506% ( 66) 00:16:41.538 3.331 - 3.345: 88.9611% ( 109) 00:16:41.538 3.345 - 3.360: 89.6444% ( 122) 00:16:41.538 3.360 - 3.375: 90.4565% ( 145) 00:16:41.538 3.375 - 3.389: 91.1565% ( 125) 00:16:41.538 3.389 - 3.404: 91.7502% ( 106) 00:16:41.538 3.404 - 3.418: 92.4223% ( 120) 00:16:41.538 3.418 - 3.433: 93.0104% ( 105) 00:16:41.538 3.433 - 3.447: 93.4640% ( 81) 00:16:41.538 3.447 - 3.462: 94.1977% ( 131) 00:16:41.538 3.462 - 3.476: 94.9594% ( 136) 00:16:41.538 3.476 - 3.491: 95.8723% ( 163) 00:16:41.538 3.491 - 3.505: 96.6564% ( 140) 00:16:41.538 3.505 - 3.520: 97.4573% ( 143) 00:16:41.538 3.520 - 3.535: 98.0902% ( 113) 00:16:41.538 3.535 - 3.549: 98.5550% ( 83) 00:16:41.538 3.549 - 3.564: 98.8967% ( 61) 00:16:41.538 3.564 - 3.578: 99.1487% ( 45) 00:16:41.538 3.578 - 3.593: 99.3447% ( 35) 00:16:41.538 3.593 - 3.607: 99.4679% ( 22) 00:16:41.538 3.607 - 3.622: 99.5351% ( 12) 00:16:41.538 3.622 - 3.636: 99.6136% ( 14) 00:16:41.538 3.636 - 3.651: 99.6360% ( 4) 00:16:41.538 3.651 - 3.665: 99.6472% ( 2) 00:16:41.538 3.665 - 3.680: 99.6528% ( 1) 00:16:41.538 3.680 - 3.695: 99.6640% ( 2) 00:16:41.538 3.695 - 3.709: 99.6752% ( 2) 00:16:41.538 3.753 - 3.782: 99.6808% ( 1) 00:16:41.538 4.567 - 4.596: 99.6864% ( 1) 00:16:41.538 4.625 - 4.655: 99.6920% ( 1) 00:16:41.538 4.800 - 4.829: 99.6976% ( 1) 00:16:41.538 4.858 - 4.887: 99.7032% ( 1) 00:16:41.538 4.945 - 4.975: 99.7144% ( 2) 00:16:41.538 5.091 - 5.120: 99.7200% ( 1) 00:16:41.538 5.178 - 5.207: 99.7256% ( 1) 00:16:41.538 5.382 - 5.411: 99.7368% ( 2) 00:16:41.538 5.469 - 5.498: 99.7424% ( 1) 00:16:41.538 5.527 - 5.556: 99.7480% ( 1) 00:16:41.538 5.556 - 5.585: 99.7592% ( 2) 00:16:41.538 5.585 
- 5.615: 99.7648% ( 1) 00:16:41.538 5.731 - 5.760: 99.7704% ( 1) 00:16:41.538 5.818 - 5.847: 99.7816% ( 2) 00:16:41.538 5.993 - 6.022: 99.7872% ( 1) 00:16:41.538 6.051 - 6.080: 99.7928% ( 1) 00:16:41.538 6.080 - 6.109: 99.7984% ( 1) 00:16:41.538 6.196 - 6.225: 99.8096% ( 2) 00:16:41.538 6.284 - 6.313: 99.8152% ( 1) 00:16:41.538 6.371 - 6.400: 99.8208% ( 1) 00:16:41.538 6.458 - 6.487: 99.8264% ( 1) 00:16:41.538 6.516 - 6.545: 99.8320% ( 1) 00:16:41.538 6.604 - 6.633: 99.8376% ( 1) 00:16:41.538 6.633 - 6.662: 99.8488% ( 2) 00:16:41.538 6.662 - 6.691: 99.8544% ( 1) 00:16:41.538 6.720 - 6.749: 99.8600% ( 1) 00:16:41.538 6.749 - 6.778: 99.8712% ( 2) 00:16:41.538 6.865 - 6.895: 99.8768% ( 1) 00:16:41.538 6.895 - 6.924: 99.8824% ( 1) 00:16:41.538 6.924 - 6.953: 99.8880% ( 1) 00:16:41.538 6.982 - 7.011: 99.8936% ( 1) 00:16:41.538 7.156 - 7.185: 99.8992% ( 1) 00:16:41.538 7.331 - 7.360: 99.9048% ( 1) 00:16:41.538 7.505 - 7.564: 99.9104% ( 1) 00:16:41.538 7.564 - 7.622: 99.9160% ( 1) 00:16:41.538 7.796 - 7.855: 99.9216% ( 1) 00:16:41.538 8.029 - 8.087: 99.9272% ( 1) 00:16:41.538 9.018 - 9.076: 99.9328% ( 1) 00:16:41.538 11.695 - 11.753: 99.9384% ( 1) 00:16:41.538 13.265 - 13.324: 99.9440% ( 1) 00:16:41.538 3991.738 - 4021.527: 100.0000% ( 10) 00:16:41.538 00:16:41.538 Complete histogram 00:16:41.538 ================== 00:16:41.538 Range in us Cumulative Count 00:16:41.538 1.585 - 1.593: 0.0112% ( 2) 00:16:41.538 1.593 - 1.600: 0.0280% ( 3) 00:16:41.538 1.600 - 1.607: 0.0392% ( 2) 00:16:41.538 1.615 - 1.622: 0.0448% ( 1) 00:16:41.538 1.622 - 1.629: 0.0840% ( 7) 00:16:41.538 1.629 - 1.636: 0.4257% ( 61) 00:16:41.538 1.636 - 1.644: 1.6018% ( 210) 00:16:41.538 1.644 - 1.651: 2.9292% ( 237) 00:16:41.538 1.651 - 1.658: 3.5620% ( 113) 00:16:41.538 1.658 - 1.665: 3.8421% ( 50) 00:16:41.538 1.665 - 1.673: 4.0941% ( 45) 00:16:41.538 1.673 - 1.680: 5.4831% ( 248) 00:16:41.538 1.680 - 1.687: 20.7225% ( 2721) 00:16:41.538 1.687 - 1.695: 55.4074% ( 6193) 00:16:41.538 1.695 - 1.702: 
79.3447% ( 4274) 00:16:41.538 1.702 - 1.709: 86.5472% ( 1286) 00:16:41.538 1.709 - 1.716: 90.7701% ( 754) 00:16:41.538 1.716 - 1.724: 93.5872% ( 503) 00:16:41.538 1.724 - 1.731: 94.6010% ( 181) 00:16:41.538 1.731 - 1.738: 94.8698% ( 48) 00:16:41.538 1.738 - 1.745: 95.0434% ( 31) 00:16:41.538 1.745 - 1.753: 95.5363% ( 88) 00:16:41.538 1.753 - 1.760: 96.3764% ( 150) 00:16:41.538 1.760 - 1.767: 97.5581% ( 211) 00:16:41.538 1.767 - 1.775: 98.5102% ( 170) 00:16:41.538 1.775 - 1.782: 99.0479% ( 96) 00:16:41.538 1.782 - 1.789: 99.2271% ( 32) 00:16:41.538 1.789 - 1.796: 99.3223% ( 17) 00:16:41.538 1.796 - 1.804: 99.3503% ( 5) 00:16:41.538 1.804 - 1.811: 99.3615% ( 2) 00:16:41.538 1.818 - 1.825: 99.3727% ( 2) 00:16:41.538 1.833 - 1.840: 99.3895% ( 3) 00:16:41.538 1.840 - 1.847: 99.3951% ( 1) 00:16:41.538 1.847 - 1.855: 99.4007% ( 1) 00:16:41.538 1.891 - 1.905: 99.4063% ( 1) 00:16:41.538 1.905 - 1.920: 99.4119% ( 1) 00:16:41.538 1.920 - 1.935: 99.4175% ( 1) 00:16:41.538 2.007 - 2.022: 99.4231% ( 1) 00:16:41.538 2.051 - 2.065: 99.4287% ( 1) 00:16:41.538 2.153 - 2.167: 99.4343% ( 1) 00:16:41.538 2.182 - 2.196: 99.4399% ( 1) 00:16:41.538 3.302 - 3.316: 99.4455% ( 1) 00:16:41.538 3.520 - 3.535: 99.4511% ( 1) 00:16:41.538 3.549 - 3.564: 99.4567% ( 1) 00:16:41.538 3.564 - 3.578: 99.4623% ( 1) 00:16:41.538 3.695 - 3.709: 99.4679% ( 1) 00:16:41.538 3.869 - 3.898: 99.4735% ( 1) 00:16:41.538 4.131 - 4.160: 99.4791% ( 1) 00:16:41.538 4.160 - 4.189: 99.4847% ( 1) 00:16:41.538 4.305 - 4.335: 99.4903% ( 1) 00:16:41.538 4.364 - 4.393: 99.4959% ( 1) 00:16:41.538 4.538 - 4.567: 99.5015% ( 1) 00:16:41.538 5.149 - 5.178: 99.5071% ( 1) 00:16:41.538 5.236 - 5.265: 99.5183% ( 2) 00:16:41.538 5.556 - 5.585: 99.5239% ( 1) 00:16:41.538 5.673 - 5.702: 99.5295% ( 1) 00:16:41.538 5.702 - 5.731: 99.5351% ( 1) 00:16:41.538 11.927 - 11.985: 99.5407% ( 1) 00:16:41.538 12.276 - 12.335: 99.5463% ( 1) 00:16:41.538 12.916 - 12.975: 99.5519% ( 1) 00:16:41.538 14.255 - 14.313: 99.5575% ( 1) 00:16:41.538 15.011 - 
15.127: 99.5631% ( 1) 00:16:41.538 15.360 - 15.476: 99.5687% ( 1) 00:16:41.538 17.338 - 17.455: 99.5743% ( 1) 00:16:41.538 37.004 - 37.236: 99.5799% ( 1) 00:16:41.538 3991.738 - 4021.527: 99.9888% ( 73) 00:16:41.538 4974.778 - 5004.567: 99.9944% ( 1) 00:16:41.538 5987.607 - 6017.396: 100.0000% ( 1) 00:16:41.538 [2024-11-20 12:30:47.190733] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:41.538 00:16:41.538 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:41.538 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:41.538 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:41.538 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:41.538 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:41.798 [ 00:16:41.798 { 00:16:41.798 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:41.798 "subtype": "Discovery", 00:16:41.798 "listen_addresses": [], 00:16:41.798 "allow_any_host": true, 00:16:41.798 "hosts": [] 00:16:41.798 }, 00:16:41.798 { 00:16:41.798 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:41.798 "subtype": "NVMe", 00:16:41.798 "listen_addresses": [ 00:16:41.798 { 00:16:41.798 "trtype": "VFIOUSER", 00:16:41.798 "adrfam": "IPv4", 00:16:41.798 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:41.798 "trsvcid": "0" 00:16:41.798 } 00:16:41.798 ], 00:16:41.798 "allow_any_host": true, 00:16:41.798 "hosts": [], 00:16:41.798 "serial_number": "SPDK1", 00:16:41.798 "model_number": "SPDK bdev Controller", 
"max_namespaces": 32, 00:16:41.798 "min_cntlid": 1, 00:16:41.798 "max_cntlid": 65519, 00:16:41.798 "namespaces": [ 00:16:41.798 { 00:16:41.798 "nsid": 1, 00:16:41.798 "bdev_name": "Malloc1", 00:16:41.798 "name": "Malloc1", 00:16:41.798 "nguid": "8F742AB443CD46758A41146594EFBB42", 00:16:41.798 "uuid": "8f742ab4-43cd-4675-8a41-146594efbb42" 00:16:41.798 } 00:16:41.798 ] 00:16:41.798 }, 00:16:41.798 { 00:16:41.798 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:41.798 "subtype": "NVMe", 00:16:41.798 "listen_addresses": [ 00:16:41.798 { 00:16:41.798 "trtype": "VFIOUSER", 00:16:41.798 "adrfam": "IPv4", 00:16:41.798 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:41.798 "trsvcid": "0" 00:16:41.798 } 00:16:41.798 ], 00:16:41.798 "allow_any_host": true, 00:16:41.798 "hosts": [], 00:16:41.798 "serial_number": "SPDK2", 00:16:41.798 "model_number": "SPDK bdev Controller", 00:16:41.798 "max_namespaces": 32, 00:16:41.798 "min_cntlid": 1, 00:16:41.798 "max_cntlid": 65519, 00:16:41.798 "namespaces": [ 00:16:41.798 { 00:16:41.798 "nsid": 1, 00:16:41.798 "bdev_name": "Malloc2", 00:16:41.798 "name": "Malloc2", 00:16:41.798 "nguid": "534AAA7447CA467D8C187D8B1D22A711", 00:16:41.798 "uuid": "534aaa74-47ca-467d-8c18-7d8b1d22a711" 00:16:41.798 } 00:16:41.798 ] 00:16:41.798 } 00:16:41.798 ] 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=896577 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:41.798 12:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:41.798 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:41.798 [2024-11-20 12:30:47.546114] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:42.057 Malloc3 00:16:42.057 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:42.057 [2024-11-20 12:30:47.787848] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:42.057 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:42.317 Asynchronous Event Request test 00:16:42.317 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.317 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.317 Registering asynchronous event callbacks... 00:16:42.317 Starting namespace attribute notice tests for all controllers... 
00:16:42.317 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:42.317 aer_cb - Changed Namespace 00:16:42.317 Cleaning up... 00:16:42.317 [ 00:16:42.317 { 00:16:42.317 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:42.317 "subtype": "Discovery", 00:16:42.317 "listen_addresses": [], 00:16:42.317 "allow_any_host": true, 00:16:42.317 "hosts": [] 00:16:42.317 }, 00:16:42.317 { 00:16:42.317 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:42.317 "subtype": "NVMe", 00:16:42.317 "listen_addresses": [ 00:16:42.317 { 00:16:42.317 "trtype": "VFIOUSER", 00:16:42.317 "adrfam": "IPv4", 00:16:42.317 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:42.317 "trsvcid": "0" 00:16:42.317 } 00:16:42.317 ], 00:16:42.317 "allow_any_host": true, 00:16:42.317 "hosts": [], 00:16:42.317 "serial_number": "SPDK1", 00:16:42.317 "model_number": "SPDK bdev Controller", 00:16:42.317 "max_namespaces": 32, 00:16:42.317 "min_cntlid": 1, 00:16:42.317 "max_cntlid": 65519, 00:16:42.317 "namespaces": [ 00:16:42.317 { 00:16:42.317 "nsid": 1, 00:16:42.317 "bdev_name": "Malloc1", 00:16:42.317 "name": "Malloc1", 00:16:42.317 "nguid": "8F742AB443CD46758A41146594EFBB42", 00:16:42.317 "uuid": "8f742ab4-43cd-4675-8a41-146594efbb42" 00:16:42.317 }, 00:16:42.317 { 00:16:42.317 "nsid": 2, 00:16:42.317 "bdev_name": "Malloc3", 00:16:42.317 "name": "Malloc3", 00:16:42.317 "nguid": "CCA52C97957A4FEF8526D1C51BB09E14", 00:16:42.317 "uuid": "cca52c97-957a-4fef-8526-d1c51bb09e14" 00:16:42.317 } 00:16:42.317 ] 00:16:42.317 }, 00:16:42.317 { 00:16:42.317 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:42.317 "subtype": "NVMe", 00:16:42.317 "listen_addresses": [ 00:16:42.317 { 00:16:42.317 "trtype": "VFIOUSER", 00:16:42.317 "adrfam": "IPv4", 00:16:42.317 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:42.317 "trsvcid": "0" 00:16:42.317 } 00:16:42.317 ], 00:16:42.317 "allow_any_host": true, 00:16:42.317 "hosts": [], 00:16:42.317 "serial_number": 
"SPDK2", 00:16:42.317 "model_number": "SPDK bdev Controller", 00:16:42.317 "max_namespaces": 32, 00:16:42.317 "min_cntlid": 1, 00:16:42.317 "max_cntlid": 65519, 00:16:42.317 "namespaces": [ 00:16:42.317 { 00:16:42.317 "nsid": 1, 00:16:42.317 "bdev_name": "Malloc2", 00:16:42.317 "name": "Malloc2", 00:16:42.317 "nguid": "534AAA7447CA467D8C187D8B1D22A711", 00:16:42.317 "uuid": "534aaa74-47ca-467d-8c18-7d8b1d22a711" 00:16:42.317 } 00:16:42.317 ] 00:16:42.317 } 00:16:42.317 ] 00:16:42.317 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 896577 00:16:42.317 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:42.317 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:42.317 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:42.317 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:42.317 [2024-11-20 12:30:48.025439] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:16:42.317 [2024-11-20 12:30:48.025471] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896600 ] 00:16:42.317 [2024-11-20 12:30:48.060720] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:42.317 [2024-11-20 12:30:48.069646] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:42.317 [2024-11-20 12:30:48.069671] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f76e30f3000 00:16:42.317 [2024-11-20 12:30:48.070645] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.317 [2024-11-20 12:30:48.071654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.317 [2024-11-20 12:30:48.072663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.317 [2024-11-20 12:30:48.073670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.317 [2024-11-20 12:30:48.074676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.317 [2024-11-20 12:30:48.075683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.317 [2024-11-20 12:30:48.076685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:42.317 
[2024-11-20 12:30:48.077697] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:42.578 [2024-11-20 12:30:48.078703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:42.578 [2024-11-20 12:30:48.078713] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f76e30e8000 00:16:42.578 [2024-11-20 12:30:48.079556] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:42.578 [2024-11-20 12:30:48.088485] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:42.578 [2024-11-20 12:30:48.088516] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:42.578 [2024-11-20 12:30:48.093578] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:42.578 [2024-11-20 12:30:48.093616] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:42.578 [2024-11-20 12:30:48.093676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:42.578 [2024-11-20 12:30:48.093689] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:42.578 [2024-11-20 12:30:48.093694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:42.578 [2024-11-20 12:30:48.094579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:42.578 [2024-11-20 12:30:48.094588] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:42.578 [2024-11-20 12:30:48.094595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:42.578 [2024-11-20 12:30:48.095582] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:42.578 [2024-11-20 12:30:48.095590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:42.578 [2024-11-20 12:30:48.095597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:42.578 [2024-11-20 12:30:48.096591] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:42.578 [2024-11-20 12:30:48.096599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:42.578 [2024-11-20 12:30:48.097599] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:42.578 [2024-11-20 12:30:48.097607] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:42.578 [2024-11-20 12:30:48.097612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:42.578 [2024-11-20 12:30:48.097617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:42.578 [2024-11-20 12:30:48.097727] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:42.578 [2024-11-20 12:30:48.097731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:42.578 [2024-11-20 12:30:48.097736] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:42.578 [2024-11-20 12:30:48.098612] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:42.578 [2024-11-20 12:30:48.099612] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:42.579 [2024-11-20 12:30:48.100619] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:42.579 [2024-11-20 12:30:48.101620] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:42.579 [2024-11-20 12:30:48.101655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:42.579 [2024-11-20 12:30:48.102629] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:42.579 [2024-11-20 12:30:48.102637] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:42.579 [2024-11-20 12:30:48.102641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.102656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:42.579 [2024-11-20 12:30:48.102662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.102673] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.579 [2024-11-20 12:30:48.102676] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.579 [2024-11-20 12:30:48.102680] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.579 [2024-11-20 12:30:48.102690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.111420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.111431] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:42.579 [2024-11-20 12:30:48.111435] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:42.579 [2024-11-20 12:30:48.111439] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:42.579 [2024-11-20 12:30:48.111443] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:42.579 [2024-11-20 12:30:48.111449] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:42.579 [2024-11-20 12:30:48.111453] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:42.579 [2024-11-20 12:30:48.111457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.111465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.111475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.119418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.119430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.579 [2024-11-20 12:30:48.119437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.579 [2024-11-20 12:30:48.119444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.579 [2024-11-20 12:30:48.119450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.579 [2024-11-20 12:30:48.119454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.119460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.119467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.127417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.127426] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:42.579 [2024-11-20 12:30:48.127431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.127436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.127441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.127448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.135417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.135471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.135479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:42.579 
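The SET FEATURES NUMBER OF QUEUES completion above carries cdw0:7e007e, which is where the 127 I/O submission and completion queues reported later in the identify dump come from. Per the NVMe base specification, completion dword 0 for Set Features 07h holds two 0-based counts: NSQA in bits 15:0 and NCQA in bits 31:16. A minimal sketch of the decode (the helper name is illustrative, not SPDK API):

```python
def decode_num_queues_cdw0(cdw0: int) -> tuple[int, int]:
    """Decode Set Features 07h (Number of Queues) completion dword 0.

    Both fields are 0-based counts, so the usable queue count is field + 1.
    """
    nsqa = (cdw0 & 0xFFFF) + 1          # I/O submission queues allocated
    ncqa = ((cdw0 >> 16) & 0xFFFF) + 1  # I/O completion queues allocated
    return nsqa, ncqa

# cdw0:7e007e from the completion in the trace -> 127 SQs, 127 CQs
sq_count, cq_count = decode_num_queues_cdw0(0x7E007E)
```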
[2024-11-20 12:30:48.135485] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:42.579 [2024-11-20 12:30:48.135489] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:42.579 [2024-11-20 12:30:48.135491] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.579 [2024-11-20 12:30:48.135497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.143418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.143428] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:42.579 [2024-11-20 12:30:48.143436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.143445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.143451] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.579 [2024-11-20 12:30:48.143454] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.579 [2024-11-20 12:30:48.143457] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.579 [2024-11-20 12:30:48.143462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.151417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.151431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.151438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.151444] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:42.579 [2024-11-20 12:30:48.151448] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.579 [2024-11-20 12:30:48.151450] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.579 [2024-11-20 12:30:48.151456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.159418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.159428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.159433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.159441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.159446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.159450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.159454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.159459] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:42.579 [2024-11-20 12:30:48.159463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:42.579 [2024-11-20 12:30:48.159467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:42.579 [2024-11-20 12:30:48.159482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.167417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.167430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.175417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.175428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.183418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 
12:30:48.183430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:42.579 [2024-11-20 12:30:48.191419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:42.579 [2024-11-20 12:30:48.191434] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:42.579 [2024-11-20 12:30:48.191438] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:42.579 [2024-11-20 12:30:48.191440] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:42.579 [2024-11-20 12:30:48.191443] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:42.580 [2024-11-20 12:30:48.191446] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:42.580 [2024-11-20 12:30:48.191451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:42.580 [2024-11-20 12:30:48.191457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:42.580 [2024-11-20 12:30:48.191461] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:42.580 [2024-11-20 12:30:48.191463] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.580 [2024-11-20 12:30:48.191468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:42.580 [2024-11-20 12:30:48.191474] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:42.580 [2024-11-20 12:30:48.191477] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:42.580 [2024-11-20 12:30:48.191480] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.580 [2024-11-20 12:30:48.191485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:42.580 [2024-11-20 12:30:48.191491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:42.580 [2024-11-20 12:30:48.191494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:42.580 [2024-11-20 12:30:48.191497] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:42.580 [2024-11-20 12:30:48.191502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:42.580 [2024-11-20 12:30:48.199418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:42.580 [2024-11-20 12:30:48.199431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:42.580 [2024-11-20 12:30:48.199439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:42.580 [2024-11-20 12:30:48.199445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:42.580 ===================================================== 00:16:42.580 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:42.580 ===================================================== 00:16:42.580 Controller Capabilities/Features 00:16:42.580 
================================ 00:16:42.580 Vendor ID: 4e58 00:16:42.580 Subsystem Vendor ID: 4e58 00:16:42.580 Serial Number: SPDK2 00:16:42.580 Model Number: SPDK bdev Controller 00:16:42.580 Firmware Version: 25.01 00:16:42.580 Recommended Arb Burst: 6 00:16:42.580 IEEE OUI Identifier: 8d 6b 50 00:16:42.580 Multi-path I/O 00:16:42.580 May have multiple subsystem ports: Yes 00:16:42.580 May have multiple controllers: Yes 00:16:42.580 Associated with SR-IOV VF: No 00:16:42.580 Max Data Transfer Size: 131072 00:16:42.580 Max Number of Namespaces: 32 00:16:42.580 Max Number of I/O Queues: 127 00:16:42.580 NVMe Specification Version (VS): 1.3 00:16:42.580 NVMe Specification Version (Identify): 1.3 00:16:42.580 Maximum Queue Entries: 256 00:16:42.580 Contiguous Queues Required: Yes 00:16:42.580 Arbitration Mechanisms Supported 00:16:42.580 Weighted Round Robin: Not Supported 00:16:42.580 Vendor Specific: Not Supported 00:16:42.580 Reset Timeout: 15000 ms 00:16:42.580 Doorbell Stride: 4 bytes 00:16:42.580 NVM Subsystem Reset: Not Supported 00:16:42.580 Command Sets Supported 00:16:42.580 NVM Command Set: Supported 00:16:42.580 Boot Partition: Not Supported 00:16:42.580 Memory Page Size Minimum: 4096 bytes 00:16:42.580 Memory Page Size Maximum: 4096 bytes 00:16:42.580 Persistent Memory Region: Not Supported 00:16:42.580 Optional Asynchronous Events Supported 00:16:42.580 Namespace Attribute Notices: Supported 00:16:42.580 Firmware Activation Notices: Not Supported 00:16:42.580 ANA Change Notices: Not Supported 00:16:42.580 PLE Aggregate Log Change Notices: Not Supported 00:16:42.580 LBA Status Info Alert Notices: Not Supported 00:16:42.580 EGE Aggregate Log Change Notices: Not Supported 00:16:42.580 Normal NVM Subsystem Shutdown event: Not Supported 00:16:42.580 Zone Descriptor Change Notices: Not Supported 00:16:42.580 Discovery Log Change Notices: Not Supported 00:16:42.580 Controller Attributes 00:16:42.580 128-bit Host Identifier: Supported 00:16:42.580 
Non-Operational Permissive Mode: Not Supported 00:16:42.580 NVM Sets: Not Supported 00:16:42.580 Read Recovery Levels: Not Supported 00:16:42.580 Endurance Groups: Not Supported 00:16:42.580 Predictable Latency Mode: Not Supported 00:16:42.580 Traffic Based Keep ALive: Not Supported 00:16:42.580 Namespace Granularity: Not Supported 00:16:42.580 SQ Associations: Not Supported 00:16:42.580 UUID List: Not Supported 00:16:42.580 Multi-Domain Subsystem: Not Supported 00:16:42.580 Fixed Capacity Management: Not Supported 00:16:42.580 Variable Capacity Management: Not Supported 00:16:42.580 Delete Endurance Group: Not Supported 00:16:42.580 Delete NVM Set: Not Supported 00:16:42.580 Extended LBA Formats Supported: Not Supported 00:16:42.580 Flexible Data Placement Supported: Not Supported 00:16:42.580 00:16:42.580 Controller Memory Buffer Support 00:16:42.580 ================================ 00:16:42.580 Supported: No 00:16:42.580 00:16:42.580 Persistent Memory Region Support 00:16:42.580 ================================ 00:16:42.580 Supported: No 00:16:42.580 00:16:42.580 Admin Command Set Attributes 00:16:42.580 ============================ 00:16:42.580 Security Send/Receive: Not Supported 00:16:42.580 Format NVM: Not Supported 00:16:42.580 Firmware Activate/Download: Not Supported 00:16:42.580 Namespace Management: Not Supported 00:16:42.580 Device Self-Test: Not Supported 00:16:42.580 Directives: Not Supported 00:16:42.580 NVMe-MI: Not Supported 00:16:42.580 Virtualization Management: Not Supported 00:16:42.580 Doorbell Buffer Config: Not Supported 00:16:42.580 Get LBA Status Capability: Not Supported 00:16:42.580 Command & Feature Lockdown Capability: Not Supported 00:16:42.580 Abort Command Limit: 4 00:16:42.580 Async Event Request Limit: 4 00:16:42.580 Number of Firmware Slots: N/A 00:16:42.580 Firmware Slot 1 Read-Only: N/A 00:16:42.580 Firmware Activation Without Reset: N/A 00:16:42.580 Multiple Update Detection Support: N/A 00:16:42.580 Firmware Update 
Granularity: No Information Provided 00:16:42.580 Per-Namespace SMART Log: No 00:16:42.580 Asymmetric Namespace Access Log Page: Not Supported 00:16:42.580 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:42.580 Command Effects Log Page: Supported 00:16:42.580 Get Log Page Extended Data: Supported 00:16:42.580 Telemetry Log Pages: Not Supported 00:16:42.580 Persistent Event Log Pages: Not Supported 00:16:42.580 Supported Log Pages Log Page: May Support 00:16:42.580 Commands Supported & Effects Log Page: Not Supported 00:16:42.580 Feature Identifiers & Effects Log Page:May Support 00:16:42.580 NVMe-MI Commands & Effects Log Page: May Support 00:16:42.580 Data Area 4 for Telemetry Log: Not Supported 00:16:42.580 Error Log Page Entries Supported: 128 00:16:42.580 Keep Alive: Supported 00:16:42.580 Keep Alive Granularity: 10000 ms 00:16:42.580 00:16:42.580 NVM Command Set Attributes 00:16:42.580 ========================== 00:16:42.580 Submission Queue Entry Size 00:16:42.580 Max: 64 00:16:42.580 Min: 64 00:16:42.580 Completion Queue Entry Size 00:16:42.580 Max: 16 00:16:42.580 Min: 16 00:16:42.580 Number of Namespaces: 32 00:16:42.580 Compare Command: Supported 00:16:42.580 Write Uncorrectable Command: Not Supported 00:16:42.580 Dataset Management Command: Supported 00:16:42.580 Write Zeroes Command: Supported 00:16:42.580 Set Features Save Field: Not Supported 00:16:42.580 Reservations: Not Supported 00:16:42.580 Timestamp: Not Supported 00:16:42.580 Copy: Supported 00:16:42.580 Volatile Write Cache: Present 00:16:42.580 Atomic Write Unit (Normal): 1 00:16:42.580 Atomic Write Unit (PFail): 1 00:16:42.580 Atomic Compare & Write Unit: 1 00:16:42.580 Fused Compare & Write: Supported 00:16:42.580 Scatter-Gather List 00:16:42.580 SGL Command Set: Supported (Dword aligned) 00:16:42.580 SGL Keyed: Not Supported 00:16:42.580 SGL Bit Bucket Descriptor: Not Supported 00:16:42.580 SGL Metadata Pointer: Not Supported 00:16:42.580 Oversized SGL: Not Supported 00:16:42.580 SGL 
Metadata Address: Not Supported 00:16:42.580 SGL Offset: Not Supported 00:16:42.580 Transport SGL Data Block: Not Supported 00:16:42.580 Replay Protected Memory Block: Not Supported 00:16:42.580 00:16:42.580 Firmware Slot Information 00:16:42.580 ========================= 00:16:42.580 Active slot: 1 00:16:42.580 Slot 1 Firmware Revision: 25.01 00:16:42.580 00:16:42.580 00:16:42.580 Commands Supported and Effects 00:16:42.580 ============================== 00:16:42.580 Admin Commands 00:16:42.580 -------------- 00:16:42.580 Get Log Page (02h): Supported 00:16:42.580 Identify (06h): Supported 00:16:42.580 Abort (08h): Supported 00:16:42.580 Set Features (09h): Supported 00:16:42.580 Get Features (0Ah): Supported 00:16:42.580 Asynchronous Event Request (0Ch): Supported 00:16:42.581 Keep Alive (18h): Supported 00:16:42.581 I/O Commands 00:16:42.581 ------------ 00:16:42.581 Flush (00h): Supported LBA-Change 00:16:42.581 Write (01h): Supported LBA-Change 00:16:42.581 Read (02h): Supported 00:16:42.581 Compare (05h): Supported 00:16:42.581 Write Zeroes (08h): Supported LBA-Change 00:16:42.581 Dataset Management (09h): Supported LBA-Change 00:16:42.581 Copy (19h): Supported LBA-Change 00:16:42.581 00:16:42.581 Error Log 00:16:42.581 ========= 00:16:42.581 00:16:42.581 Arbitration 00:16:42.581 =========== 00:16:42.581 Arbitration Burst: 1 00:16:42.581 00:16:42.581 Power Management 00:16:42.581 ================ 00:16:42.581 Number of Power States: 1 00:16:42.581 Current Power State: Power State #0 00:16:42.581 Power State #0: 00:16:42.581 Max Power: 0.00 W 00:16:42.581 Non-Operational State: Operational 00:16:42.581 Entry Latency: Not Reported 00:16:42.581 Exit Latency: Not Reported 00:16:42.581 Relative Read Throughput: 0 00:16:42.581 Relative Read Latency: 0 00:16:42.581 Relative Write Throughput: 0 00:16:42.581 Relative Write Latency: 0 00:16:42.581 Idle Power: Not Reported 00:16:42.581 Active Power: Not Reported 00:16:42.581 Non-Operational Permissive Mode: Not 
Supported 00:16:42.581 [2024-11-20 12:30:48.199526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:42.581 [2024-11-20 12:30:48.207419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:42.581 [2024-11-20 12:30:48.207448] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:42.581 [2024-11-20 12:30:48.207456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.581 [2024-11-20 12:30:48.207461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.581 [2024-11-20 12:30:48.207466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.581 [2024-11-20 12:30:48.207471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.581 [2024-11-20 12:30:48.207507] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:42.581 [2024-11-20 12:30:48.207516] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:42.581 [2024-11-20 12:30:48.208511] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:42.581 [2024-11-20 12:30:48.208553] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:42.581 [2024-11-20 12:30:48.208559] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:42.581 [2024-11-20 12:30:48.209520] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:42.581 [2024-11-20 12:30:48.209530] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:42.581 [2024-11-20 12:30:48.209575] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:42.581 [2024-11-20 12:30:48.210541] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:42.581
00:16:42.581 Health Information 00:16:42.581 ================== 00:16:42.581 Critical Warnings: 00:16:42.581 Available Spare Space: OK 00:16:42.581 Temperature: OK 00:16:42.581 Device Reliability: OK 00:16:42.581 Read Only: No 00:16:42.581 Volatile Memory Backup: OK 00:16:42.581 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:42.581 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:42.581 Available Spare: 0% 00:16:42.581 Available Spare Threshold: 0% 00:16:42.581 Life Percentage Used: 0% 00:16:42.581 Data Units Read: 0 00:16:42.581 Data Units Written: 0 00:16:42.581 Host Read Commands: 0 00:16:42.581 Host Write Commands: 0 00:16:42.581 Controller Busy Time: 0 minutes 00:16:42.581 Power Cycles: 0 00:16:42.581 Power On Hours: 0 hours 00:16:42.581 Unsafe Shutdowns: 0 00:16:42.581 Unrecoverable Media Errors: 0 00:16:42.581 Lifetime Error Log Entries: 0 00:16:42.581 Warning Temperature Time: 0 minutes 00:16:42.581 Critical Temperature Time: 0 minutes 00:16:42.581 00:16:42.581 Number of Queues 00:16:42.581 ================ 00:16:42.581 Number of I/O Submission Queues: 127 00:16:42.581 Number of I/O Completion Queues: 127 00:16:42.581 00:16:42.581 Active Namespaces 00:16:42.581 ================= 00:16:42.581 Namespace ID:1 00:16:42.581 Error Recovery Timeout: Unlimited
00:16:42.581 Command Set Identifier: NVM (00h) 00:16:42.581 Deallocate: Supported 00:16:42.581 Deallocated/Unwritten Error: Not Supported 00:16:42.581 Deallocated Read Value: Unknown 00:16:42.581 Deallocate in Write Zeroes: Not Supported 00:16:42.581 Deallocated Guard Field: 0xFFFF 00:16:42.581 Flush: Supported 00:16:42.581 Reservation: Supported 00:16:42.581 Namespace Sharing Capabilities: Multiple Controllers 00:16:42.581 Size (in LBAs): 131072 (0GiB) 00:16:42.581 Capacity (in LBAs): 131072 (0GiB) 00:16:42.581 Utilization (in LBAs): 131072 (0GiB) 00:16:42.581 NGUID: 534AAA7447CA467D8C187D8B1D22A711 00:16:42.581 UUID: 534aaa74-47ca-467d-8c18-7d8b1d22a711 00:16:42.581 Thin Provisioning: Not Supported 00:16:42.581 Per-NS Atomic Units: Yes 00:16:42.581 Atomic Boundary Size (Normal): 0 00:16:42.581 Atomic Boundary Size (PFail): 0 00:16:42.581 Atomic Boundary Offset: 0 00:16:42.581 Maximum Single Source Range Length: 65535 00:16:42.581 Maximum Copy Length: 65535 00:16:42.581 Maximum Source Range Count: 1 00:16:42.581 NGUID/EUI64 Never Reused: No 00:16:42.581 Namespace Write Protected: No 00:16:42.581 Number of LBA Formats: 1 00:16:42.581 Current LBA Format: LBA Format #00 00:16:42.581 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:42.581 00:16:42.581 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:42.840 [2024-11-20 12:30:48.426078] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:48.115 Initializing NVMe Controllers 00:16:48.115 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:48.115 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:48.116 Initialization complete. Launching workers. 00:16:48.116 ======================================================== 00:16:48.116 Latency(us) 00:16:48.116 Device Information : IOPS MiB/s Average min max 00:16:48.116 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39994.91 156.23 3200.22 849.47 9750.03 00:16:48.116 ======================================================== 00:16:48.116 Total : 39994.91 156.23 3200.22 849.47 9750.03 00:16:48.116 00:16:48.116 [2024-11-20 12:30:53.531688] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:48.116 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:48.116 [2024-11-20 12:30:53.754326] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:53.415 Initializing NVMe Controllers 00:16:53.415 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:53.415 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:53.415 Initialization complete. Launching workers. 
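The MiB/s column in the spdk_nvme_perf summary is derived from the IOPS column and the fixed I/O size passed as -o 4096. A quick sanity check of the read run's figures (helper name is illustrative):

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# 39994.91 IOPS at 4 KiB per I/O, as in the read-run summary table
mibps = round(iops_to_mibps(39994.91, 4096), 2)  # -> 156.23, matching the table
```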
00:16:53.415 ======================================================== 00:16:53.415 Latency(us) 00:16:53.415 Device Information : IOPS MiB/s Average min max 00:16:53.415 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39998.00 156.24 3200.32 869.80 10733.08 00:16:53.415 ======================================================== 00:16:53.415 Total : 39998.00 156.24 3200.32 869.80 10733.08 00:16:53.415 00:16:53.415 [2024-11-20 12:30:58.778977] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:53.415 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:53.415 [2024-11-20 12:30:58.974604] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:58.686 [2024-11-20 12:31:04.111508] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:58.686 Initializing NVMe Controllers 00:16:58.686 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:58.686 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:58.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:58.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:58.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:58.686 Initialization complete. Launching workers. 
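The nvme_pcie_prp_list_append debug lines earlier in the trace show one PRP entry for the page-aligned 4 KiB and 512-byte buffers, and a PRP1/PRP2 pair (2 entries) for the 8 KiB log-page buffer. A rough model of that entry count for a physically contiguous buffer, assuming the 4 KiB memory page size used in this run (a sketch of the PRP rule, not SPDK's implementation):

```python
PAGE_SIZE = 4096

def prp_entry_count(virt_addr: int, length: int, page_size: int = PAGE_SIZE) -> int:
    """PRP entries needed for a contiguous buffer.

    PRP1 covers from virt_addr to the end of its page; each additional
    page adds one entry (beyond two, entries would live in a PRP list).
    """
    first_chunk = page_size - (virt_addr % page_size)
    if length <= first_chunk:
        return 1
    remaining = length - first_chunk
    return 1 + (remaining + page_size - 1) // page_size

# Values from the trace: 4 KiB identify buffer, 8 KiB log page, 512 B log page
assert prp_entry_count(0x2000002FB000, 4096) == 1
assert prp_entry_count(0x2000002F6000, 8192) == 2   # PRP1 + PRP2
assert prp_entry_count(0x2000002FC000, 512) == 1
```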
00:16:58.686 Starting thread on core 2 00:16:58.686 Starting thread on core 3 00:16:58.686 Starting thread on core 1 00:16:58.686 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:58.686 [2024-11-20 12:31:04.392849] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:01.974 [2024-11-20 12:31:07.606656] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:01.974 Initializing NVMe Controllers 00:17:01.974 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:01.974 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:01.974 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:01.974 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:01.974 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:01.974 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:01.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:01.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:01.974 Initialization complete. Launching workers. 
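The -c arguments in the invocations above are hex CPU masks: -c 0x2 pins the perf runs to lcore 1, -c 0xE gives the reconnect run lcores 1-3 (matching the three "Starting thread on core" lines), and the arbitration run's -c 0xf covers lcores 0-3. A small decoder sketch (function name is illustrative):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Expand a hex CPU mask (as passed via -c) into a sorted core list."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

assert cores_from_mask(0x2) == [1]           # perf runs
assert cores_from_mask(0xE) == [1, 2, 3]     # reconnect run
assert cores_from_mask(0xF) == [0, 1, 2, 3]  # arbitration run
```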
00:17:01.974 Starting thread on core 1 with urgent priority queue 00:17:01.974 Starting thread on core 2 with urgent priority queue 00:17:01.974 Starting thread on core 3 with urgent priority queue 00:17:01.974 Starting thread on core 0 with urgent priority queue 00:17:01.974 SPDK bdev Controller (SPDK2 ) core 0: 1468.33 IO/s 68.10 secs/100000 ios 00:17:01.974 SPDK bdev Controller (SPDK2 ) core 1: 1913.33 IO/s 52.26 secs/100000 ios 00:17:01.974 SPDK bdev Controller (SPDK2 ) core 2: 1477.67 IO/s 67.67 secs/100000 ios 00:17:01.974 SPDK bdev Controller (SPDK2 ) core 3: 2189.33 IO/s 45.68 secs/100000 ios 00:17:01.974 ======================================================== 00:17:01.974 00:17:01.974 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:02.233 [2024-11-20 12:31:07.872636] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:02.233 Initializing NVMe Controllers 00:17:02.233 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:02.233 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:02.233 Namespace ID: 1 size: 0GB 00:17:02.233 Initialization complete. 00:17:02.233 INFO: using host memory buffer for IO 00:17:02.233 Hello world! 
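In the arbitration summary, the secs/100000 ios column is simply the reciprocal of the IO/s column scaled to 100000 I/Os, e.g. core 0's 1468.33 IO/s corresponds to 68.10 s. A one-line check (helper name is illustrative):

```python
def secs_per_n_ios(iops: float, n: int = 100_000) -> float:
    """Time to complete n I/Os at a given rate, as the arbitration tool reports."""
    return round(n / iops, 2)

# Per-core figures from the summary table above
assert secs_per_n_ios(1468.33) == 68.10   # core 0
assert secs_per_n_ios(2189.33) == 45.68   # core 3
```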
00:17:02.233 [2024-11-20 12:31:07.884711] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:02.233 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:02.492 [2024-11-20 12:31:08.143635] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:03.872 Initializing NVMe Controllers 00:17:03.872 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.872 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.872 Initialization complete. Launching workers. 00:17:03.872 submit (in ns) avg, min, max = 4545.1, 2924.5, 3998444.5 00:17:03.872 complete (in ns) avg, min, max = 21653.7, 1578.2, 3999338.2 00:17:03.872 00:17:03.872 Submit histogram 00:17:03.872 ================ 00:17:03.872 Range in us Cumulative Count 00:17:03.872 2.924 - 2.938: 0.0169% ( 3) 00:17:03.872 2.938 - 2.953: 0.0562% ( 7) 00:17:03.872 2.953 - 2.967: 0.0899% ( 6) 00:17:03.872 2.967 - 2.982: 0.3539% ( 47) 00:17:03.872 2.982 - 2.996: 1.1348% ( 139) 00:17:03.872 2.996 - 3.011: 2.8933% ( 313) 00:17:03.872 3.011 - 3.025: 5.4888% ( 462) 00:17:03.872 3.025 - 3.040: 9.4831% ( 711) 00:17:03.872 3.040 - 3.055: 14.4663% ( 887) 00:17:03.872 3.055 - 3.069: 19.8146% ( 952) 00:17:03.872 3.069 - 3.084: 25.0843% ( 938) 00:17:03.872 3.084 - 3.098: 30.2921% ( 927) 00:17:03.872 3.098 - 3.113: 34.2416% ( 703) 00:17:03.872 3.113 - 3.127: 36.9270% ( 478) 00:17:03.872 3.127 - 3.142: 39.5843% ( 473) 00:17:03.872 3.142 - 3.156: 42.4719% ( 514) 00:17:03.872 3.156 - 3.171: 45.5000% ( 539) 00:17:03.872 3.171 - 3.185: 50.2135% ( 839) 00:17:03.872 3.185 - 3.200: 54.4663% ( 757) 00:17:03.872 3.200 - 3.215: 59.0899% ( 823) 00:17:03.872 3.215 - 3.229: 65.8034% ( 1195) 
00:17:03.872 3.229 - 3.244: 71.8933% ( 1084) 00:17:03.872 3.244 - 3.258: 76.8820% ( 888) 00:17:03.872 3.258 - 3.273: 80.5225% ( 648) 00:17:03.872 3.273 - 3.287: 83.6124% ( 550) 00:17:03.872 3.287 - 3.302: 86.0281% ( 430) 00:17:03.872 3.302 - 3.316: 87.4831% ( 259) 00:17:03.872 3.316 - 3.331: 88.3090% ( 147) 00:17:03.872 3.331 - 3.345: 88.8483% ( 96) 00:17:03.872 3.345 - 3.360: 89.3933% ( 97) 00:17:03.872 3.360 - 3.375: 90.0281% ( 113) 00:17:03.872 3.375 - 3.389: 90.8315% ( 143) 00:17:03.872 3.389 - 3.404: 91.6461% ( 145) 00:17:03.872 3.404 - 3.418: 92.2753% ( 112) 00:17:03.873 3.418 - 3.433: 92.8371% ( 100) 00:17:03.873 3.433 - 3.447: 93.4270% ( 105) 00:17:03.873 3.447 - 3.462: 94.0674% ( 114) 00:17:03.873 3.462 - 3.476: 94.6742% ( 108) 00:17:03.873 3.476 - 3.491: 95.4888% ( 145) 00:17:03.873 3.491 - 3.505: 96.3989% ( 162) 00:17:03.873 3.505 - 3.520: 97.0674% ( 119) 00:17:03.873 3.520 - 3.535: 97.7809% ( 127) 00:17:03.873 3.535 - 3.549: 98.3596% ( 103) 00:17:03.873 3.549 - 3.564: 98.7247% ( 65) 00:17:03.873 3.564 - 3.578: 99.0337% ( 55) 00:17:03.873 3.578 - 3.593: 99.1966% ( 29) 00:17:03.873 3.593 - 3.607: 99.3820% ( 33) 00:17:03.873 3.607 - 3.622: 99.4719% ( 16) 00:17:03.873 3.622 - 3.636: 99.5449% ( 13) 00:17:03.873 3.636 - 3.651: 99.5843% ( 7) 00:17:03.873 3.651 - 3.665: 99.6292% ( 8) 00:17:03.873 3.665 - 3.680: 99.6404% ( 2) 00:17:03.873 3.680 - 3.695: 99.6573% ( 3) 00:17:03.873 3.695 - 3.709: 99.6629% ( 1) 00:17:03.873 3.724 - 3.753: 99.6742% ( 2) 00:17:03.873 3.753 - 3.782: 99.6798% ( 1) 00:17:03.873 3.811 - 3.840: 99.6854% ( 1) 00:17:03.873 3.840 - 3.869: 99.6910% ( 1) 00:17:03.873 3.898 - 3.927: 99.7022% ( 2) 00:17:03.873 3.985 - 4.015: 99.7079% ( 1) 00:17:03.873 4.364 - 4.393: 99.7135% ( 1) 00:17:03.873 4.975 - 5.004: 99.7191% ( 1) 00:17:03.873 5.236 - 5.265: 99.7247% ( 1) 00:17:03.873 5.295 - 5.324: 99.7303% ( 1) 00:17:03.873 5.324 - 5.353: 99.7360% ( 1) 00:17:03.873 5.353 - 5.382: 99.7416% ( 1) 00:17:03.873 5.556 - 5.585: 99.7472% ( 1) 00:17:03.873 5.585 
- 5.615: 99.7584% ( 2) 00:17:03.873 5.615 - 5.644: 99.7640% ( 1) 00:17:03.873 5.644 - 5.673: 99.7697% ( 1) 00:17:03.873 5.964 - 5.993: 99.7753% ( 1) 00:17:03.873 6.022 - 6.051: 99.7809% ( 1) 00:17:03.873 6.080 - 6.109: 99.7921% ( 2) 00:17:03.873 6.138 - 6.167: 99.7978% ( 1) 00:17:03.873 6.167 - 6.196: 99.8034% ( 1) 00:17:03.873 6.196 - 6.225: 99.8090% ( 1) 00:17:03.873 6.255 - 6.284: 99.8146% ( 1) 00:17:03.873 6.313 - 6.342: 99.8202% ( 1) 00:17:03.873 6.371 - 6.400: 99.8371% ( 3) 00:17:03.873 6.429 - 6.458: 99.8427% ( 1) 00:17:03.873 6.487 - 6.516: 99.8483% ( 1) 00:17:03.873 6.516 - 6.545: 99.8596% ( 2) 00:17:03.873 6.545 - 6.575: 99.8652% ( 1) 00:17:03.873 6.575 - 6.604: 99.8708% ( 1) 00:17:03.873 6.633 - 6.662: 99.8820% ( 2) 00:17:03.873 6.662 - 6.691: 99.8876% ( 1) 00:17:03.873 6.720 - 6.749: 99.8989% ( 2) 00:17:03.873 6.749 - 6.778: 99.9045% ( 1) 00:17:03.873 7.011 - 7.040: 99.9101% ( 1) 00:17:03.873 7.069 - 7.098: 99.9157% ( 1) 00:17:03.873 7.098 - 7.127: 99.9213% ( 1) 00:17:03.873 7.215 - 7.244: 99.9270% ( 1) 00:17:03.873 7.564 - 7.622: 99.9326% ( 1) 00:17:03.873 7.738 - 7.796: 99.9382% ( 1) 00:17:03.873 7.855 - 7.913: 99.9438% ( 1) 00:17:03.873 9.949 - 10.007: 99.9494% ( 1) 00:17:03.873 12.509 - 12.567: 99.9551% ( 1) 00:17:03.873 13.498 - 13.556: 99.9607% ( 1) 00:17:03.873 13.673 - 13.731: 99.9663% ( 1) 00:17:03.873 3991.738 - 4021.527: 100.0000% ( 6) 00:17:03.873 00:17:03.873 Complete histogram 00:17:03.873 ================== 00:17:03.873 Range in us Cumulative Count 00:17:03.873 1.578 - 1.585: 0.0225% ( 4) 00:17:03.873 1.585 - 1.593: 0.0281% ( 1) 00:17:03.873 1.593 - 1.600: 0.0506% ( 4) 00:17:03.873 1.600 - 1.607: 0.0730% ( 4) 00:17:03.873 1.607 - 1.615: 0.0843% ( 2) 00:17:03.873 1.615 - 1.622: 0.2303% ( 26) 00:17:03.873 1.622 - 1.629: 0.9551% ( 129) 00:17:03.873 1.629 - 1.636: 2.3708% ( 252) 00:17:03.873 1.636 - 1.644: 3.4944% ( 200) 00:17:03.873 1.644 - 1.651: 4.3371% ( 150) 00:17:03.873 1.651 - 1.658: 5.2079% ( 155) 00:17:03.873 1.658 - 1.665: 11.2247% 
( 1071) 00:17:03.873 1.665 - 1.673: 37.5337% ( 4683) 00:17:03.873 1.673 - 1.680: 69.3764% ( 5668) 00:17:03.873 1.680 - 1.687: 84.8989% ( 2763) 00:17:03.873 1.687 - 1.695: 90.3258% ( 966) 00:17:03.873 1.695 - 1.702: 93.7022% ( 601) 00:17:03.873 1.702 - 1.709: 95.6685% ( 350) 00:17:03.873 1.709 - 1.716: 96.4551% ( 140) 00:17:03.873 1.716 - 1.724: 96.8483% ( 70) 00:17:03.873 1.724 - 1.731: 97.0618% ( 38) 00:17:03.873 1.731 - 1.738: 97.3652% ( 54) 00:17:03.873 1.738 - 1.745: 97.6966% ( 59) 00:17:03.873 1.745 - 1.753: 98.1966% ( 89) 00:17:03.873 1.753 - 1.760: 98.6573% ( 82) 00:17:03.873 1.760 - 1.767: 98.9045% ( 44) 00:17:03.873 1.767 - 1.775: 99.0337% ( 23) 00:17:03.873 1.775 - 1.782: 99.1404% ( 19) 00:17:03.873 1.782 - 1.789: 99.1742% ( 6) 00:17:03.873 1.789 - 1.796: 99.2022% ( 5) 00:17:03.873 1.796 - 1.804: 99.2191% ( 3) 00:17:03.873 1.804 - 1.811: 99.2303% ( 2) 00:17:03.873 1.811 - 1.818: 99.2360% ( 1) 00:17:03.873 1.825 - 1.833: 99.2472% ( 2) 00:17:03.873 1.833 - 1.840: 99.2528% ( 1) 00:17:03.873 1.840 - 1.847: 99.2640% ( 2) 00:17:03.873 1.847 - 1.855: 99.2753% ( 2) 00:17:03.873 1.855 - 1.862: 99.2809% ( 1) 00:17:03.873 1.862 - 1.876: 99.2921% ( 2) 00:17:03.873 1.876 - 1.891: 99.2978% ( 1) 00:17:03.873 1.891 - 1.905: 99.3034% ( 1) 00:17:03.873 1.905 - 1.920: 99.3202% ( 3) 00:17:03.873 1.920 - 1.935: 99.3371% ( 3) 00:17:03.873 1.935 - 1.949: 99.3596% ( 4) 00:17:03.873 1.993 - 2.007: 99.3652% ( 1) 00:17:03.873 2.007 - 2.022: 99.3708% ( 1) 00:17:03.873 2.036 - 2.051: 99.3764% ( 1) 00:17:03.873 2.051 - 2.065: 99.3820% ( 1) 00:17:03.873 2.124 - 2.138: 99.3876% ( 1) 00:17:03.873 2.400 - 2.415: 99.3933% ( 1) 00:17:03.873 3.535 - 3.549: 99.3989% ( 1) 00:17:03.873 3.651 - 3.665: 99.4045% ( 1) 00:17:03.873 3.782 - 3.811: 99.4101% ( 1) 00:17:03.873 4.044 - 4.073: 99.4157% ( 1) 00:17:03.873 4.073 - 4.102: 99.4213% ( 1) 00:17:03.874 4.189 - 4.218: 99.4270% ( 1) 00:17:03.874 4.247 - 4.276: 99.4326% ( 1) 00:17:03.874 4.335 - 4.364: 99.4382% ( 1) 00:17:03.874 4.567 - 4.596: 
99.4438% ( 1) 00:17:03.874 4.771 - 4.800: 99.4494% ( 1) 00:17:03.874 4.829 - 4.858: 99.4551% ( 1) 00:17:03.874 5.295 - 5.324: 99.4607% ( 1) 00:17:03.874 5.353 - 5.382: 99.4663% ( 1) 00:17:03.874 5.760 - 5.789: 99.4719% ( 1) 00:17:03.874 [2024-11-20 12:31:09.238204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:03.874 5.789 - 5.818: 99.4775% ( 1) 00:17:03.874 7.564 - 7.622: 99.4831% ( 1) 00:17:03.874 9.251 - 9.309: 99.4888% ( 1) 00:17:03.874 13.440 - 13.498: 99.4944% ( 1) 00:17:03.874 13.847 - 13.905: 99.5000% ( 1) 00:17:03.874 3991.738 - 4021.527: 100.0000% ( 89) 00:17:03.874 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:03.874 [ 00:17:03.874 { 00:17:03.874 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:03.874 "subtype": "Discovery", 00:17:03.874 "listen_addresses": [], 00:17:03.874 "allow_any_host": true, 00:17:03.874 "hosts": [] 00:17:03.874 }, 00:17:03.874 { 00:17:03.874 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:03.874 "subtype": "NVMe", 00:17:03.874 "listen_addresses": [ 00:17:03.874 { 00:17:03.874 "trtype": "VFIOUSER", 00:17:03.874 "adrfam": "IPv4", 00:17:03.874 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:03.874 "trsvcid": "0" 00:17:03.874 } 
00:17:03.874 ], 00:17:03.874 "allow_any_host": true, 00:17:03.874 "hosts": [], 00:17:03.874 "serial_number": "SPDK1", 00:17:03.874 "model_number": "SPDK bdev Controller", 00:17:03.874 "max_namespaces": 32, 00:17:03.874 "min_cntlid": 1, 00:17:03.874 "max_cntlid": 65519, 00:17:03.874 "namespaces": [ 00:17:03.874 { 00:17:03.874 "nsid": 1, 00:17:03.874 "bdev_name": "Malloc1", 00:17:03.874 "name": "Malloc1", 00:17:03.874 "nguid": "8F742AB443CD46758A41146594EFBB42", 00:17:03.874 "uuid": "8f742ab4-43cd-4675-8a41-146594efbb42" 00:17:03.874 }, 00:17:03.874 { 00:17:03.874 "nsid": 2, 00:17:03.874 "bdev_name": "Malloc3", 00:17:03.874 "name": "Malloc3", 00:17:03.874 "nguid": "CCA52C97957A4FEF8526D1C51BB09E14", 00:17:03.874 "uuid": "cca52c97-957a-4fef-8526-d1c51bb09e14" 00:17:03.874 } 00:17:03.874 ] 00:17:03.874 }, 00:17:03.874 { 00:17:03.874 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:03.874 "subtype": "NVMe", 00:17:03.874 "listen_addresses": [ 00:17:03.874 { 00:17:03.874 "trtype": "VFIOUSER", 00:17:03.874 "adrfam": "IPv4", 00:17:03.874 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:03.874 "trsvcid": "0" 00:17:03.874 } 00:17:03.874 ], 00:17:03.874 "allow_any_host": true, 00:17:03.874 "hosts": [], 00:17:03.874 "serial_number": "SPDK2", 00:17:03.874 "model_number": "SPDK bdev Controller", 00:17:03.874 "max_namespaces": 32, 00:17:03.874 "min_cntlid": 1, 00:17:03.874 "max_cntlid": 65519, 00:17:03.874 "namespaces": [ 00:17:03.874 { 00:17:03.874 "nsid": 1, 00:17:03.874 "bdev_name": "Malloc2", 00:17:03.874 "name": "Malloc2", 00:17:03.874 "nguid": "534AAA7447CA467D8C187D8B1D22A711", 00:17:03.874 "uuid": "534aaa74-47ca-467d-8c18-7d8b1d22a711" 00:17:03.874 } 00:17:03.874 ] 00:17:03.874 } 00:17:03.874 ] 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=900522 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:03.874 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:03.874 [2024-11-20 12:31:09.597185] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:04.138 Malloc4 00:17:04.138 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:04.138 [2024-11-20 12:31:09.813738] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:04.138 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:17:04.138 Asynchronous Event Request test 00:17:04.138 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:04.138 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:04.138 Registering asynchronous event callbacks... 00:17:04.138 Starting namespace attribute notice tests for all controllers... 00:17:04.138 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:04.138 aer_cb - Changed Namespace 00:17:04.138 Cleaning up... 00:17:04.398 [ 00:17:04.398 { 00:17:04.398 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:04.398 "subtype": "Discovery", 00:17:04.398 "listen_addresses": [], 00:17:04.398 "allow_any_host": true, 00:17:04.398 "hosts": [] 00:17:04.398 }, 00:17:04.398 { 00:17:04.398 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:04.398 "subtype": "NVMe", 00:17:04.398 "listen_addresses": [ 00:17:04.398 { 00:17:04.398 "trtype": "VFIOUSER", 00:17:04.398 "adrfam": "IPv4", 00:17:04.398 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:04.398 "trsvcid": "0" 00:17:04.398 } 00:17:04.398 ], 00:17:04.398 "allow_any_host": true, 00:17:04.398 "hosts": [], 00:17:04.398 "serial_number": "SPDK1", 00:17:04.398 "model_number": "SPDK bdev Controller", 00:17:04.398 "max_namespaces": 32, 00:17:04.398 "min_cntlid": 1, 00:17:04.398 "max_cntlid": 65519, 00:17:04.398 "namespaces": [ 00:17:04.398 { 00:17:04.398 "nsid": 1, 00:17:04.398 "bdev_name": "Malloc1", 00:17:04.398 "name": "Malloc1", 00:17:04.398 "nguid": "8F742AB443CD46758A41146594EFBB42", 00:17:04.398 "uuid": "8f742ab4-43cd-4675-8a41-146594efbb42" 00:17:04.398 }, 00:17:04.398 { 00:17:04.398 "nsid": 2, 00:17:04.398 "bdev_name": "Malloc3", 00:17:04.398 "name": "Malloc3", 00:17:04.398 "nguid": "CCA52C97957A4FEF8526D1C51BB09E14", 00:17:04.398 "uuid": "cca52c97-957a-4fef-8526-d1c51bb09e14" 00:17:04.398 } 00:17:04.398 ] 00:17:04.398 }, 00:17:04.398 { 00:17:04.398 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:04.398 "subtype": "NVMe", 
00:17:04.398 "listen_addresses": [ 00:17:04.398 { 00:17:04.398 "trtype": "VFIOUSER", 00:17:04.398 "adrfam": "IPv4", 00:17:04.398 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:04.398 "trsvcid": "0" 00:17:04.398 } 00:17:04.398 ], 00:17:04.398 "allow_any_host": true, 00:17:04.398 "hosts": [], 00:17:04.398 "serial_number": "SPDK2", 00:17:04.398 "model_number": "SPDK bdev Controller", 00:17:04.398 "max_namespaces": 32, 00:17:04.398 "min_cntlid": 1, 00:17:04.398 "max_cntlid": 65519, 00:17:04.398 "namespaces": [ 00:17:04.398 { 00:17:04.398 "nsid": 1, 00:17:04.398 "bdev_name": "Malloc2", 00:17:04.398 "name": "Malloc2", 00:17:04.398 "nguid": "534AAA7447CA467D8C187D8B1D22A711", 00:17:04.398 "uuid": "534aaa74-47ca-467d-8c18-7d8b1d22a711" 00:17:04.398 }, 00:17:04.398 { 00:17:04.398 "nsid": 2, 00:17:04.398 "bdev_name": "Malloc4", 00:17:04.398 "name": "Malloc4", 00:17:04.398 "nguid": "547A7296479E465DA05621704DA461A5", 00:17:04.398 "uuid": "547a7296-479e-465d-a056-21704da461a5" 00:17:04.398 } 00:17:04.398 ] 00:17:04.398 } 00:17:04.398 ] 00:17:04.398 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 900522 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 892336 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 892336 ']' 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 892336 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
892336 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 892336' 00:17:04.399 killing process with pid 892336 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 892336 00:17:04.399 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 892336 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=900616 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 900616' 00:17:04.658 Process pid: 900616 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:04.658 
12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 900616 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 900616 ']' 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.658 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.659 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.659 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:04.659 [2024-11-20 12:31:10.357362] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:04.659 [2024-11-20 12:31:10.358193] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:17:04.659 [2024-11-20 12:31:10.358231] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.918 [2024-11-20 12:31:10.433355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.918 [2024-11-20 12:31:10.473810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.918 [2024-11-20 12:31:10.473844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:04.918 [2024-11-20 12:31:10.473851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.918 [2024-11-20 12:31:10.473857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.918 [2024-11-20 12:31:10.473861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.918 [2024-11-20 12:31:10.475352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.918 [2024-11-20 12:31:10.475464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.918 [2024-11-20 12:31:10.475508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.918 [2024-11-20 12:31:10.475508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.918 [2024-11-20 12:31:10.541467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:04.918 [2024-11-20 12:31:10.541865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:04.918 [2024-11-20 12:31:10.542339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:04.918 [2024-11-20 12:31:10.542742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:04.918 [2024-11-20 12:31:10.542777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:05.487 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.487 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:05.487 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:06.425 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:06.684 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:06.684 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:06.684 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:06.684 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:06.684 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:06.944 Malloc1 00:17:06.944 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:07.203 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:07.203 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:07.463 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:07.463 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:07.463 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:07.723 Malloc2 00:17:07.723 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:07.983 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:07.983 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 900616 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 900616 ']' 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 900616 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.242 12:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 900616 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 900616' 00:17:08.242 killing process with pid 900616 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 900616 00:17:08.242 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 900616 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:08.502 00:17:08.502 real 0m50.986s 00:17:08.502 user 3m15.117s 00:17:08.502 sys 0m2.948s 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:08.502 ************************************ 00:17:08.502 END TEST nvmf_vfio_user 00:17:08.502 ************************************ 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.502 ************************************ 00:17:08.502 START TEST nvmf_vfio_user_nvme_compliance 00:17:08.502 ************************************ 00:17:08.502 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:08.762 * Looking for test storage... 00:17:08.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.762 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.762 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:08.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.762 --rc genhtml_branch_coverage=1 00:17:08.762 --rc genhtml_function_coverage=1 00:17:08.762 --rc genhtml_legend=1 00:17:08.762 --rc geninfo_all_blocks=1 00:17:08.762 --rc geninfo_unexecuted_blocks=1 00:17:08.762 00:17:08.762 ' 00:17:08.762 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:08.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.762 --rc genhtml_branch_coverage=1 00:17:08.762 --rc genhtml_function_coverage=1 00:17:08.762 --rc genhtml_legend=1 00:17:08.762 --rc geninfo_all_blocks=1 00:17:08.762 --rc geninfo_unexecuted_blocks=1 00:17:08.762 00:17:08.762 ' 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:08.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.763 --rc genhtml_branch_coverage=1 00:17:08.763 --rc genhtml_function_coverage=1 00:17:08.763 --rc 
genhtml_legend=1 00:17:08.763 --rc geninfo_all_blocks=1 00:17:08.763 --rc geninfo_unexecuted_blocks=1 00:17:08.763 00:17:08.763 ' 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:08.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.763 --rc genhtml_branch_coverage=1 00:17:08.763 --rc genhtml_function_coverage=1 00:17:08.763 --rc genhtml_legend=1 00:17:08.763 --rc geninfo_all_blocks=1 00:17:08.763 --rc geninfo_unexecuted_blocks=1 00:17:08.763 00:17:08.763 ' 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.763 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.763 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=901406 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 901406' 00:17:08.763 Process pid: 901406 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 901406 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 901406 ']' 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.763 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:08.763 [2024-11-20 12:31:14.493534] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:17:08.763 [2024-11-20 12:31:14.493581] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.022 [2024-11-20 12:31:14.566652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:09.022 [2024-11-20 12:31:14.603354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.022 [2024-11-20 12:31:14.603390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.022 [2024-11-20 12:31:14.603397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.022 [2024-11-20 12:31:14.603402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.022 [2024-11-20 12:31:14.603407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.022 [2024-11-20 12:31:14.604990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.022 [2024-11-20 12:31:14.605113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.022 [2024-11-20 12:31:14.605115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.591 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.591 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:09.591 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 12:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 malloc0 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:10.970 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:10.970 00:17:10.970 00:17:10.970 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.970 http://cunit.sourceforge.net/ 00:17:10.970 00:17:10.970 00:17:10.970 Suite: nvme_compliance 00:17:10.970 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 12:31:16.528219] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.970 [2024-11-20 12:31:16.529542] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:10.970 [2024-11-20 12:31:16.529556] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:10.970 [2024-11-20 12:31:16.529561] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:10.970 [2024-11-20 12:31:16.531241] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.970 passed 00:17:10.970 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 12:31:16.603736] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.970 [2024-11-20 12:31:16.606750] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.970 passed 00:17:10.970 Test: admin_identify_ns ...[2024-11-20 12:31:16.680220] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.229 [2024-11-20 12:31:16.739421] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:11.229 [2024-11-20 12:31:16.747421] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:11.229 [2024-11-20 12:31:16.768513] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:11.229 passed 00:17:11.229 Test: admin_get_features_mandatory_features ...[2024-11-20 12:31:16.843133] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.229 [2024-11-20 12:31:16.846151] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.229 passed 00:17:11.229 Test: admin_get_features_optional_features ...[2024-11-20 12:31:16.918630] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.229 [2024-11-20 12:31:16.921648] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.229 passed 00:17:11.489 Test: admin_set_features_number_of_queues ...[2024-11-20 12:31:16.996206] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.489 [2024-11-20 12:31:17.101499] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.489 passed 00:17:11.489 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 12:31:17.171290] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.489 [2024-11-20 12:31:17.174314] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.489 passed 00:17:11.489 Test: admin_get_log_page_with_lpo ...[2024-11-20 12:31:17.250263] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.748 [2024-11-20 12:31:17.322419] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:11.748 [2024-11-20 12:31:17.332707] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.748 passed 00:17:11.748 Test: fabric_property_get ...[2024-11-20 12:31:17.405421] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.748 [2024-11-20 12:31:17.406636] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:11.748 [2024-11-20 12:31:17.408442] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.748 passed 00:17:11.748 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 12:31:17.479894] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.748 [2024-11-20 12:31:17.481118] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:11.748 [2024-11-20 12:31:17.482915] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.007 passed 00:17:12.007 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 12:31:17.556193] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.007 [2024-11-20 12:31:17.639426] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:12.007 [2024-11-20 12:31:17.655421] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:12.007 [2024-11-20 12:31:17.660519] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.007 passed 00:17:12.007 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 12:31:17.735234] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.007 [2024-11-20 12:31:17.736452] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:12.007 [2024-11-20 12:31:17.738251] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.007 passed 00:17:12.267 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 12:31:17.810444] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.267 [2024-11-20 12:31:17.888424] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:12.267 [2024-11-20 
12:31:17.912417] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:12.267 [2024-11-20 12:31:17.917503] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.267 passed 00:17:12.267 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 12:31:17.987268] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.267 [2024-11-20 12:31:17.988494] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:12.267 [2024-11-20 12:31:17.988515] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:12.267 [2024-11-20 12:31:17.990291] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.267 passed 00:17:12.526 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 12:31:18.063468] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.526 [2024-11-20 12:31:18.156416] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:12.526 [2024-11-20 12:31:18.164415] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:12.526 [2024-11-20 12:31:18.172418] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:12.526 [2024-11-20 12:31:18.180419] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:12.526 [2024-11-20 12:31:18.209493] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.526 passed 00:17:12.526 Test: admin_create_io_sq_verify_pc ...[2024-11-20 12:31:18.284061] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.785 [2024-11-20 12:31:18.300424] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:12.785 [2024-11-20 12:31:18.318228] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.785 passed 00:17:12.785 Test: admin_create_io_qp_max_qps ...[2024-11-20 12:31:18.387724] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.724 [2024-11-20 12:31:19.473422] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:14.292 [2024-11-20 12:31:19.861767] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.292 passed 00:17:14.292 Test: admin_create_io_sq_shared_cq ...[2024-11-20 12:31:19.936047] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.551 [2024-11-20 12:31:20.067417] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:14.551 [2024-11-20 12:31:20.104477] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.551 passed 00:17:14.551 00:17:14.551 Run Summary: Type Total Ran Passed Failed Inactive 00:17:14.551 suites 1 1 n/a 0 0 00:17:14.551 tests 18 18 18 0 0 00:17:14.551 asserts 360 360 360 0 n/a 00:17:14.551 00:17:14.551 Elapsed time = 1.466 seconds 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 901406 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 901406 ']' 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 901406 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901406 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901406' 00:17:14.551 killing process with pid 901406 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 901406 00:17:14.551 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 901406 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:14.810 00:17:14.810 real 0m6.149s 00:17:14.810 user 0m17.485s 00:17:14.810 sys 0m0.550s 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:14.810 ************************************ 00:17:14.810 END TEST nvmf_vfio_user_nvme_compliance 00:17:14.810 ************************************ 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.810 ************************************ 00:17:14.810 START TEST nvmf_vfio_user_fuzz 00:17:14.810 ************************************ 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:14.810 * Looking for test storage... 00:17:14.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:14.810 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:15.070 12:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:15.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.070 --rc genhtml_branch_coverage=1 00:17:15.070 --rc genhtml_function_coverage=1 00:17:15.070 --rc genhtml_legend=1 00:17:15.070 --rc geninfo_all_blocks=1 00:17:15.070 --rc geninfo_unexecuted_blocks=1 00:17:15.070 00:17:15.070 ' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:15.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.070 --rc genhtml_branch_coverage=1 00:17:15.070 --rc genhtml_function_coverage=1 00:17:15.070 --rc genhtml_legend=1 00:17:15.070 --rc geninfo_all_blocks=1 00:17:15.070 --rc geninfo_unexecuted_blocks=1 00:17:15.070 00:17:15.070 ' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:15.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.070 --rc genhtml_branch_coverage=1 00:17:15.070 --rc genhtml_function_coverage=1 00:17:15.070 --rc genhtml_legend=1 00:17:15.070 --rc geninfo_all_blocks=1 00:17:15.070 --rc geninfo_unexecuted_blocks=1 00:17:15.070 00:17:15.070 ' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:15.070 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:15.070 --rc genhtml_branch_coverage=1 00:17:15.070 --rc genhtml_function_coverage=1 00:17:15.070 --rc genhtml_legend=1 00:17:15.070 --rc geninfo_all_blocks=1 00:17:15.070 --rc geninfo_unexecuted_blocks=1 00:17:15.070 00:17:15.070 ' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.070 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.071 12:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:15.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=902587 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 902587' 00:17:15.071 Process pid: 902587 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 902587 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 902587 ']' 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.071 12:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.071 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.329 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.330 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:15.330 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:16.267 malloc0 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:16.267 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:48.513 Fuzzing completed. Shutting down the fuzz application 00:17:48.513 00:17:48.513 Dumping successful admin opcodes: 00:17:48.513 8, 9, 10, 24, 00:17:48.513 Dumping successful io opcodes: 00:17:48.513 0, 00:17:48.513 NS: 0x20000081ef00 I/O qp, Total commands completed: 1072737, total successful commands: 4230, random_seed: 411680896 00:17:48.513 NS: 0x20000081ef00 admin qp, Total commands completed: 263423, total successful commands: 2119, random_seed: 2369855296 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 902587 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 902587 ']' 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 902587 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902587 00:17:48.513 12:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902587' 00:17:48.513 killing process with pid 902587 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 902587 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 902587 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:48.513 00:17:48.513 real 0m32.172s 00:17:48.513 user 0m29.464s 00:17:48.513 sys 0m31.640s 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.513 ************************************ 00:17:48.513 END TEST nvmf_vfio_user_fuzz 00:17:48.513 ************************************ 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:48.513 ************************************ 00:17:48.513 START TEST nvmf_auth_target 00:17:48.513 ************************************ 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:48.513 * Looking for test storage... 00:17:48.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.513 12:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.513 12:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.513 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.513 --rc genhtml_branch_coverage=1 00:17:48.513 --rc genhtml_function_coverage=1 00:17:48.513 --rc genhtml_legend=1 00:17:48.513 --rc geninfo_all_blocks=1 00:17:48.514 --rc geninfo_unexecuted_blocks=1 00:17:48.514 00:17:48.514 ' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:48.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.514 --rc genhtml_branch_coverage=1 00:17:48.514 --rc genhtml_function_coverage=1 00:17:48.514 --rc genhtml_legend=1 00:17:48.514 --rc geninfo_all_blocks=1 00:17:48.514 --rc geninfo_unexecuted_blocks=1 00:17:48.514 00:17:48.514 ' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:48.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.514 --rc genhtml_branch_coverage=1 00:17:48.514 --rc genhtml_function_coverage=1 00:17:48.514 --rc genhtml_legend=1 00:17:48.514 --rc geninfo_all_blocks=1 00:17:48.514 --rc geninfo_unexecuted_blocks=1 00:17:48.514 00:17:48.514 ' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:48.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.514 --rc genhtml_branch_coverage=1 00:17:48.514 --rc genhtml_function_coverage=1 00:17:48.514 --rc genhtml_legend=1 00:17:48.514 
--rc geninfo_all_blocks=1 00:17:48.514 --rc geninfo_unexecuted_blocks=1 00:17:48.514 00:17:48.514 ' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.514 
12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:48.514 12:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:48.514 12:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:48.514 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:53.792 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:53.792 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:17:53.792 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:17:53.792 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.792 
12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:17:53.792 Found net devices under 0000:1a:00.0: cvl_0_0 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.792 
12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:17:53.792 Found net devices under 0000:1a:00.1: cvl_0_1 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:53.792 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:53.792 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.792 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.792 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.792 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:53.792 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:53.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:17:53.793 00:17:53.793 --- 10.0.0.2 ping statistics --- 00:17:53.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.793 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:17:53.793 00:17:53.793 --- 10.0.0.1 ping statistics --- 00:17:53.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.793 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=911499 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 911499 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 911499 ']' 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.793 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.362 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.362 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:54.362 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.362 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.362 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=911774 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=351166710fbd96b1b4e363a3483a93404066625aaa448b75 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qxN 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 351166710fbd96b1b4e363a3483a93404066625aaa448b75 0 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 351166710fbd96b1b4e363a3483a93404066625aaa448b75 0 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=351166710fbd96b1b4e363a3483a93404066625aaa448b75 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qxN 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qxN 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.qxN 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe75b1166381629d33ba9db6e72108f1249c6583c73d17cf2991488c33c4a3d9 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ur0 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fe75b1166381629d33ba9db6e72108f1249c6583c73d17cf2991488c33c4a3d9 3 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fe75b1166381629d33ba9db6e72108f1249c6583c73d17cf2991488c33c4a3d9 3 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe75b1166381629d33ba9db6e72108f1249c6583c73d17cf2991488c33c4a3d9 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:54.362 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ur0 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ur0 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ur0 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5eb2bd4f4910d2eef09a62058e9b098c 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6Qc 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5eb2bd4f4910d2eef09a62058e9b098c 1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
5eb2bd4f4910d2eef09a62058e9b098c 1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5eb2bd4f4910d2eef09a62058e9b098c 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6Qc 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6Qc 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.6Qc 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b851ecdc302a3e0477eeba625fa89874e0aec7351c6757ba 00:17:54.622 12:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.T3N 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b851ecdc302a3e0477eeba625fa89874e0aec7351c6757ba 2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b851ecdc302a3e0477eeba625fa89874e0aec7351c6757ba 2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b851ecdc302a3e0477eeba625fa89874e0aec7351c6757ba 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.T3N 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.T3N 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.T3N 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e237c44c948e5be7ec319444e38ecda8d44eae6100f2cb2e 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6GS 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e237c44c948e5be7ec319444e38ecda8d44eae6100f2cb2e 2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e237c44c948e5be7ec319444e38ecda8d44eae6100f2cb2e 2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e237c44c948e5be7ec319444e38ecda8d44eae6100f2cb2e 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6GS 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6GS 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.6GS 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=81abd70c82e09b2606df30bc1e2add9d 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Lp2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 81abd70c82e09b2606df30bc1e2add9d 1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 81abd70c82e09b2606df30bc1e2add9d 1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=81abd70c82e09b2606df30bc1e2add9d 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Lp2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Lp2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Lp2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0cb82d34f5e68c56b7c525f26f8198982b8f9b36cc4a58775af74f0ac02e782b 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4r2 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0cb82d34f5e68c56b7c525f26f8198982b8f9b36cc4a58775af74f0ac02e782b 3 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 0cb82d34f5e68c56b7c525f26f8198982b8f9b36cc4a58775af74f0ac02e782b 3 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:54.622 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0cb82d34f5e68c56b7c525f26f8198982b8f9b36cc4a58775af74f0ac02e782b 00:17:54.623 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:54.623 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4r2 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4r2 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.4r2 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 911499 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 911499 ']' 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
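The `gen_dhchap_key` calls traced above read random bytes with `xxd`, then hand the hex string to `format_key`, which runs an inline `python -` step to wrap it in a `DHHC-1:<digest>:<base64>:` envelope. The exact heredoc is not shown in the trace; the sketch below reproduces the layout under the assumption (consistent with the secrets visible later in this log) that the base64 payload is the ASCII key followed by its CRC32 as four little-endian bytes. `format_dhchap_key`/`parse_dhchap_key` are illustrative names, not SPDK helpers.

```python
# Sketch of the DHHC-1 key envelope built by nvmf/common.sh's inline python
# step. Assumption: payload = base64(key_bytes + crc32(key_bytes) as 4
# little-endian bytes); digest index is the 0-3 value from the digests map
# (null/sha256/sha384/sha512) rendered as two hex digits.
import base64
import zlib


def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Wrap a hex-string secret in the DHHC-1 envelope used by the test."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    payload = base64.b64encode(raw + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, digest, payload)


def parse_dhchap_key(blob: str) -> tuple[int, str]:
    """Inverse operation: recover (digest, key) and verify the trailing CRC."""
    prefix, digest, payload, _ = blob.split(":")
    data = base64.b64decode(payload)
    raw, crc = data[:-4], data[-4:]
    assert prefix == "DHHC-1"
    assert zlib.crc32(raw).to_bytes(4, byteorder="little") == crc, "CRC mismatch"
    return int(digest, 16), raw.decode()


# Round-trip the sha256 key generated at target/auth.sh@96 above (digest=1).
key = format_dhchap_key("81abd70c82e09b2606df30bc1e2add9d", 1)
digest, secret = parse_dhchap_key(key)
```

The `chmod 0600` that follows in the trace matters: the keyring RPCs reject key files with looser permissions.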
00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 911774 /var/tmp/host.sock 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 911774 ']' 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:54.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.884 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qxN 00:17:55.144 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.145 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.145 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.145 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qxN 00:17:55.145 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qxN 00:17:55.404 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.ur0 ]] 00:17:55.404 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ur0 00:17:55.404 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.404 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.404 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.404 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ur0 00:17:55.404 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ur0 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6Qc 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6Qc 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6Qc 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.T3N ]] 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T3N 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T3N 00:17:55.662 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T3N 00:17:55.920 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:55.921 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6GS 00:17:55.921 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.921 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.921 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.921 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6GS 00:17:55.921 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6GS 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Lp2 ]] 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lp2 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lp2 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lp2 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4r2 00:17:56.180 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.181 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.181 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.181 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4r2 00:17:56.181 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4r2 00:17:56.440 12:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:56.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:56.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.699 12:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.699 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.959 00:17:56.959 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.959 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.959 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.959 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.959 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.959 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.959 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.217 { 00:17:57.217 "cntlid": 1, 00:17:57.217 "qid": 0, 00:17:57.217 "state": "enabled", 00:17:57.217 "thread": "nvmf_tgt_poll_group_000", 00:17:57.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:17:57.217 "listen_address": { 00:17:57.217 "trtype": "TCP", 00:17:57.217 "adrfam": "IPv4", 00:17:57.217 "traddr": "10.0.0.2", 00:17:57.217 "trsvcid": "4420" 00:17:57.217 }, 00:17:57.217 "peer_address": { 00:17:57.217 "trtype": "TCP", 00:17:57.217 "adrfam": "IPv4", 00:17:57.217 "traddr": "10.0.0.1", 00:17:57.217 "trsvcid": "58444" 00:17:57.217 }, 00:17:57.217 "auth": { 00:17:57.217 "state": "completed", 00:17:57.217 "digest": "sha256", 00:17:57.217 "dhgroup": "null" 00:17:57.217 } 00:17:57.217 } 00:17:57.217 ]' 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.217 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.476 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:17:57.476 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.045 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.304 00:17:58.304 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.304 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.304 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.563 { 00:17:58.563 "cntlid": 3, 00:17:58.563 "qid": 0, 00:17:58.563 "state": "enabled", 00:17:58.563 "thread": "nvmf_tgt_poll_group_000", 00:17:58.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:17:58.563 "listen_address": { 00:17:58.563 "trtype": "TCP", 00:17:58.563 "adrfam": "IPv4", 00:17:58.563 
"traddr": "10.0.0.2", 00:17:58.563 "trsvcid": "4420" 00:17:58.563 }, 00:17:58.563 "peer_address": { 00:17:58.563 "trtype": "TCP", 00:17:58.563 "adrfam": "IPv4", 00:17:58.563 "traddr": "10.0.0.1", 00:17:58.563 "trsvcid": "58468" 00:17:58.563 }, 00:17:58.563 "auth": { 00:17:58.563 "state": "completed", 00:17:58.563 "digest": "sha256", 00:17:58.563 "dhgroup": "null" 00:17:58.563 } 00:17:58.563 } 00:17:58.563 ]' 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.563 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.822 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:17:58.822 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 
--hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.389 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.648 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.909 00:17:59.909 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.909 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.909 
12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.909 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.909 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.909 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.909 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.909 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.168 { 00:18:00.168 "cntlid": 5, 00:18:00.168 "qid": 0, 00:18:00.168 "state": "enabled", 00:18:00.168 "thread": "nvmf_tgt_poll_group_000", 00:18:00.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:00.168 "listen_address": { 00:18:00.168 "trtype": "TCP", 00:18:00.168 "adrfam": "IPv4", 00:18:00.168 "traddr": "10.0.0.2", 00:18:00.168 "trsvcid": "4420" 00:18:00.168 }, 00:18:00.168 "peer_address": { 00:18:00.168 "trtype": "TCP", 00:18:00.168 "adrfam": "IPv4", 00:18:00.168 "traddr": "10.0.0.1", 00:18:00.168 "trsvcid": "58504" 00:18:00.168 }, 00:18:00.168 "auth": { 00:18:00.168 "state": "completed", 00:18:00.168 "digest": "sha256", 00:18:00.168 "dhgroup": "null" 00:18:00.168 } 00:18:00.168 } 00:18:00.168 ]' 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.168 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.427 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:00.427 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:00.995 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.995 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:00.995 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.995 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.995 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.995 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
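The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` lines traced above rely on bash's `:+` alternate-value expansion to build an optional argument list: the `--dhchap-ctrlr-key` flag pair is produced only when a controller key exists for that index, which is why the key3 iteration here calls `nvmf_subsystem_add_host` with `--dhchap-key key3` alone. A minimal, self-contained sketch of the idiom (the `ckeys` values below are hypothetical stand-ins, not the harness's real secrets):

```shell
#!/usr/bin/env bash
# Hypothetical controller-key table; index 3 is deliberately empty to mirror
# an iteration where no ckey is configured for the key under test.
ckeys=("c0val" "c1val" "c2val" "")

keyid=2
# ${ckeys[$keyid]:+...} expands to the flag pair only when the entry is non-empty
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "key$keyid optional args (count=${#ckey[@]}): ${ckey[*]:-<none>}"

keyid=3
# Empty entry: the whole alternate value vanishes and the array stays empty,
# so the flag is simply omitted from the rpc.py invocation.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "key$keyid optional args (count=${#ckey[@]}): ${ckey[*]:-<none>}"
```

Splatting an empty array into a command line contributes zero words, which is what lets one `bdev_nvme_attach_controller` call site serve both the keyN-only and keyN+ckeyN cases in the trace.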
00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.996 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.255 00:18:01.255 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.255 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.255 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.514 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.514 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.514 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.514 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.514 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.514 
12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.514 { 00:18:01.514 "cntlid": 7, 00:18:01.514 "qid": 0, 00:18:01.515 "state": "enabled", 00:18:01.515 "thread": "nvmf_tgt_poll_group_000", 00:18:01.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:01.515 "listen_address": { 00:18:01.515 "trtype": "TCP", 00:18:01.515 "adrfam": "IPv4", 00:18:01.515 "traddr": "10.0.0.2", 00:18:01.515 "trsvcid": "4420" 00:18:01.515 }, 00:18:01.515 "peer_address": { 00:18:01.515 "trtype": "TCP", 00:18:01.515 "adrfam": "IPv4", 00:18:01.515 "traddr": "10.0.0.1", 00:18:01.515 "trsvcid": "58538" 00:18:01.515 }, 00:18:01.515 "auth": { 00:18:01.515 "state": "completed", 00:18:01.515 "digest": "sha256", 00:18:01.515 "dhgroup": "null" 00:18:01.515 } 00:18:01.515 } 00:18:01.515 ]' 00:18:01.515 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.515 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.515 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.515 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:01.515 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.773 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.773 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.773 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.773 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:01.773 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.340 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.599 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.858 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.858 { 00:18:02.858 "cntlid": 9, 00:18:02.858 "qid": 0, 00:18:02.858 "state": "enabled", 00:18:02.858 "thread": "nvmf_tgt_poll_group_000", 00:18:02.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:02.858 "listen_address": { 00:18:02.858 "trtype": "TCP", 00:18:02.858 "adrfam": "IPv4", 00:18:02.858 "traddr": "10.0.0.2", 00:18:02.858 "trsvcid": "4420" 00:18:02.858 }, 00:18:02.858 "peer_address": { 00:18:02.858 "trtype": "TCP", 00:18:02.858 "adrfam": "IPv4", 00:18:02.858 "traddr": "10.0.0.1", 00:18:02.858 "trsvcid": "32950" 00:18:02.858 
}, 00:18:02.858 "auth": { 00:18:02.858 "state": "completed", 00:18:02.858 "digest": "sha256", 00:18:02.858 "dhgroup": "ffdhe2048" 00:18:02.858 } 00:18:02.858 } 00:18:02.858 ]' 00:18:02.858 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.118 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.118 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.118 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.118 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.118 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.118 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.118 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.376 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:03.376 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.943 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.202 00:18:04.202 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.202 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.202 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.460 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.460 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.460 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.460 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.460 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.461 { 00:18:04.461 "cntlid": 11, 00:18:04.461 "qid": 0, 00:18:04.461 "state": "enabled", 00:18:04.461 "thread": "nvmf_tgt_poll_group_000", 00:18:04.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:04.461 "listen_address": { 00:18:04.461 "trtype": "TCP", 00:18:04.461 "adrfam": "IPv4", 00:18:04.461 "traddr": "10.0.0.2", 00:18:04.461 "trsvcid": "4420" 00:18:04.461 }, 00:18:04.461 "peer_address": { 00:18:04.461 "trtype": "TCP", 00:18:04.461 "adrfam": "IPv4", 00:18:04.461 "traddr": "10.0.0.1", 00:18:04.461 "trsvcid": "32982" 00:18:04.461 }, 00:18:04.461 "auth": { 00:18:04.461 "state": "completed", 00:18:04.461 "digest": "sha256", 00:18:04.461 "dhgroup": "ffdhe2048" 00:18:04.461 } 00:18:04.461 } 00:18:04.461 ]' 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.461 12:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.461 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.719 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:04.719 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.286 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.545 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:05.545 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.545 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.546 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.804 00:18:05.804 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.804 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.804 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.062 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.062 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.062 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.062 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.062 12:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.062 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.062 { 00:18:06.062 "cntlid": 13, 00:18:06.062 "qid": 0, 00:18:06.062 "state": "enabled", 00:18:06.062 "thread": "nvmf_tgt_poll_group_000", 00:18:06.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:06.062 "listen_address": { 00:18:06.062 "trtype": "TCP", 00:18:06.062 "adrfam": "IPv4", 00:18:06.062 "traddr": "10.0.0.2", 00:18:06.062 "trsvcid": "4420" 00:18:06.062 }, 00:18:06.062 "peer_address": { 00:18:06.062 "trtype": "TCP", 00:18:06.062 "adrfam": "IPv4", 00:18:06.062 "traddr": "10.0.0.1", 00:18:06.063 "trsvcid": "33018" 00:18:06.063 }, 00:18:06.063 "auth": { 00:18:06.063 "state": "completed", 00:18:06.063 "digest": "sha256", 00:18:06.063 "dhgroup": "ffdhe2048" 00:18:06.063 } 00:18:06.063 } 00:18:06.063 ]' 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.063 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.320 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:06.320 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.888 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.146 00:18:07.146 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.146 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.146 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.404 { 00:18:07.404 "cntlid": 15, 00:18:07.404 "qid": 0, 00:18:07.404 "state": "enabled", 00:18:07.404 "thread": "nvmf_tgt_poll_group_000", 00:18:07.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:07.404 "listen_address": { 00:18:07.404 "trtype": "TCP", 00:18:07.404 "adrfam": "IPv4", 00:18:07.404 "traddr": "10.0.0.2", 00:18:07.404 "trsvcid": "4420" 00:18:07.404 }, 00:18:07.404 "peer_address": { 00:18:07.404 "trtype": "TCP", 00:18:07.404 "adrfam": "IPv4", 00:18:07.404 "traddr": "10.0.0.1", 
00:18:07.404 "trsvcid": "33046" 00:18:07.404 }, 00:18:07.404 "auth": { 00:18:07.404 "state": "completed", 00:18:07.404 "digest": "sha256", 00:18:07.404 "dhgroup": "ffdhe2048" 00:18:07.404 } 00:18:07.404 } 00:18:07.404 ]' 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.404 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.662 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.662 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.662 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.662 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:07.663 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:08.228 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.228 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:08.229 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.229 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.229 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.229 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.229 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.229 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.229 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:08.487 12:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.487 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.745 00:18:08.745 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.745 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.745 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.004 { 00:18:09.004 "cntlid": 17, 00:18:09.004 "qid": 0, 00:18:09.004 "state": "enabled", 00:18:09.004 "thread": "nvmf_tgt_poll_group_000", 00:18:09.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:09.004 "listen_address": { 00:18:09.004 "trtype": "TCP", 00:18:09.004 "adrfam": "IPv4", 00:18:09.004 "traddr": "10.0.0.2", 00:18:09.004 "trsvcid": "4420" 00:18:09.004 }, 00:18:09.004 "peer_address": { 00:18:09.004 "trtype": "TCP", 00:18:09.004 "adrfam": "IPv4", 00:18:09.004 "traddr": "10.0.0.1", 00:18:09.004 "trsvcid": "33072" 00:18:09.004 }, 00:18:09.004 "auth": { 00:18:09.004 "state": "completed", 00:18:09.004 "digest": "sha256", 00:18:09.004 "dhgroup": "ffdhe3072" 00:18:09.004 } 00:18:09.004 } 00:18:09.004 ]' 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.004 12:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.004 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.263 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:09.263 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:09.830 12:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.830 12:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.830 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.089 00:18:10.089 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.089 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.089 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.348 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.348 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.348 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.348 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.348 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.348 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.348 { 00:18:10.348 "cntlid": 19, 00:18:10.348 "qid": 0, 00:18:10.348 "state": "enabled", 00:18:10.348 "thread": "nvmf_tgt_poll_group_000", 00:18:10.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:10.348 "listen_address": { 00:18:10.348 "trtype": "TCP", 00:18:10.348 "adrfam": "IPv4", 00:18:10.348 "traddr": "10.0.0.2", 00:18:10.348 "trsvcid": "4420" 00:18:10.348 }, 00:18:10.348 "peer_address": { 00:18:10.348 "trtype": "TCP", 00:18:10.348 "adrfam": "IPv4", 00:18:10.348 "traddr": "10.0.0.1", 00:18:10.348 "trsvcid": "33096" 00:18:10.348 }, 00:18:10.348 "auth": { 00:18:10.348 "state": "completed", 00:18:10.348 "digest": "sha256", 00:18:10.348 "dhgroup": "ffdhe3072" 00:18:10.348 } 00:18:10.348 } 00:18:10.348 ]' 00:18:10.348 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.348 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.348 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.348 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.348 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.348 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.348 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.348 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.607 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:10.607 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:11.176 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.176 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:11.176 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.176 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.176 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.176 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.176 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.176 12:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.436 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.694 00:18:11.694 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.694 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.694 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.694 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.694 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.694 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.694 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.953 { 00:18:11.953 "cntlid": 21, 00:18:11.953 "qid": 0, 00:18:11.953 "state": "enabled", 00:18:11.953 "thread": "nvmf_tgt_poll_group_000", 00:18:11.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:11.953 "listen_address": { 00:18:11.953 "trtype": "TCP", 00:18:11.953 "adrfam": "IPv4", 00:18:11.953 "traddr": "10.0.0.2", 00:18:11.953 
"trsvcid": "4420" 00:18:11.953 }, 00:18:11.953 "peer_address": { 00:18:11.953 "trtype": "TCP", 00:18:11.953 "adrfam": "IPv4", 00:18:11.953 "traddr": "10.0.0.1", 00:18:11.953 "trsvcid": "33116" 00:18:11.953 }, 00:18:11.953 "auth": { 00:18:11.953 "state": "completed", 00:18:11.953 "digest": "sha256", 00:18:11.953 "dhgroup": "ffdhe3072" 00:18:11.953 } 00:18:11.953 } 00:18:11.953 ]' 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.953 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.213 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:12.213 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 
005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.782 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.783 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.783 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.783 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.042 00:18:13.042 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.042 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.042 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.301 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.301 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.301 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.301 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.301 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.301 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.301 { 00:18:13.301 "cntlid": 23, 00:18:13.301 "qid": 0, 00:18:13.301 "state": "enabled", 00:18:13.301 "thread": "nvmf_tgt_poll_group_000", 00:18:13.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:13.302 "listen_address": { 00:18:13.302 "trtype": "TCP", 00:18:13.302 "adrfam": "IPv4", 00:18:13.302 "traddr": "10.0.0.2", 00:18:13.302 "trsvcid": "4420" 00:18:13.302 }, 00:18:13.302 "peer_address": { 00:18:13.302 "trtype": "TCP", 00:18:13.302 "adrfam": "IPv4", 00:18:13.302 "traddr": "10.0.0.1", 00:18:13.302 "trsvcid": "35640" 00:18:13.302 }, 00:18:13.302 "auth": { 00:18:13.302 "state": "completed", 00:18:13.302 "digest": "sha256", 00:18:13.302 "dhgroup": "ffdhe3072" 00:18:13.302 } 00:18:13.302 } 00:18:13.302 ]' 00:18:13.302 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.302 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.302 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.302 12:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.302 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.560 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.560 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.560 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.560 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:13.560 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.128 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.388 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.646 00:18:14.646 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.646 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.646 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.903 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.903 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.903 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.903 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.903 12:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.904 { 00:18:14.904 "cntlid": 25, 00:18:14.904 "qid": 0, 00:18:14.904 "state": "enabled", 00:18:14.904 "thread": "nvmf_tgt_poll_group_000", 00:18:14.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:14.904 "listen_address": { 00:18:14.904 "trtype": "TCP", 00:18:14.904 "adrfam": "IPv4", 00:18:14.904 "traddr": "10.0.0.2", 00:18:14.904 "trsvcid": "4420" 00:18:14.904 }, 00:18:14.904 "peer_address": { 00:18:14.904 "trtype": "TCP", 00:18:14.904 "adrfam": "IPv4", 00:18:14.904 "traddr": "10.0.0.1", 00:18:14.904 "trsvcid": "35674" 00:18:14.904 }, 00:18:14.904 "auth": { 00:18:14.904 "state": "completed", 00:18:14.904 "digest": "sha256", 00:18:14.904 "dhgroup": "ffdhe4096" 00:18:14.904 } 00:18:14.904 } 00:18:14.904 ]' 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.904 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.162 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:15.162 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:15.731 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.731 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:15.731 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.731 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.731 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.731 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.731 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.731 12:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.990 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.249 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.249 { 00:18:16.249 "cntlid": 27, 00:18:16.249 "qid": 0, 00:18:16.249 "state": "enabled", 00:18:16.249 "thread": "nvmf_tgt_poll_group_000", 00:18:16.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:16.249 "listen_address": { 00:18:16.249 "trtype": "TCP", 00:18:16.249 "adrfam": "IPv4", 00:18:16.249 "traddr": "10.0.0.2", 00:18:16.249 
"trsvcid": "4420" 00:18:16.249 }, 00:18:16.249 "peer_address": { 00:18:16.249 "trtype": "TCP", 00:18:16.249 "adrfam": "IPv4", 00:18:16.249 "traddr": "10.0.0.1", 00:18:16.249 "trsvcid": "35700" 00:18:16.249 }, 00:18:16.249 "auth": { 00:18:16.249 "state": "completed", 00:18:16.249 "digest": "sha256", 00:18:16.249 "dhgroup": "ffdhe4096" 00:18:16.249 } 00:18:16.249 } 00:18:16.249 ]' 00:18:16.249 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.508 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.508 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.508 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.508 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.508 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.508 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.508 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.769 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:16.769 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 
005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:17.337 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.337 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:17.338 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.338 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.338 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.338 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.338 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.338 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.338 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.338 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.596 00:18:17.596 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.596 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:17.596 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.855 { 00:18:17.855 "cntlid": 29, 00:18:17.855 "qid": 0, 00:18:17.855 "state": "enabled", 00:18:17.855 "thread": "nvmf_tgt_poll_group_000", 00:18:17.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:17.855 "listen_address": { 00:18:17.855 "trtype": "TCP", 00:18:17.855 "adrfam": "IPv4", 00:18:17.855 "traddr": "10.0.0.2", 00:18:17.855 "trsvcid": "4420" 00:18:17.855 }, 00:18:17.855 "peer_address": { 00:18:17.855 "trtype": "TCP", 00:18:17.855 "adrfam": "IPv4", 00:18:17.855 "traddr": "10.0.0.1", 00:18:17.855 "trsvcid": "35710" 00:18:17.855 }, 00:18:17.855 "auth": { 00:18:17.855 "state": "completed", 00:18:17.855 "digest": "sha256", 00:18:17.855 "dhgroup": "ffdhe4096" 00:18:17.855 } 00:18:17.855 } 00:18:17.855 ]' 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.855 12:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.855 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.114 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:18.114 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:18.681 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.940 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.199 00:18:19.199 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.199 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.199 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.458 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.458 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.458 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.458 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.458 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.458 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.458 { 00:18:19.458 "cntlid": 31, 00:18:19.458 "qid": 0, 00:18:19.459 "state": "enabled", 00:18:19.459 "thread": "nvmf_tgt_poll_group_000", 00:18:19.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:19.459 "listen_address": { 00:18:19.459 "trtype": "TCP", 00:18:19.459 "adrfam": "IPv4", 00:18:19.459 "traddr": "10.0.0.2", 00:18:19.459 "trsvcid": "4420" 00:18:19.459 }, 00:18:19.459 "peer_address": { 00:18:19.459 "trtype": "TCP", 00:18:19.459 "adrfam": "IPv4", 00:18:19.459 "traddr": "10.0.0.1", 00:18:19.459 "trsvcid": "35748" 00:18:19.459 }, 00:18:19.459 "auth": { 00:18:19.459 "state": "completed", 00:18:19.459 "digest": "sha256", 00:18:19.459 "dhgroup": "ffdhe4096" 00:18:19.459 } 00:18:19.459 } 00:18:19.459 ]' 00:18:19.459 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.459 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.459 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.459 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.459 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.459 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.459 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.459 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.718 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:19.718 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.285 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.285 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.285 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.853 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.853 { 00:18:20.853 "cntlid": 33, 00:18:20.853 "qid": 0, 00:18:20.853 "state": "enabled", 00:18:20.853 "thread": "nvmf_tgt_poll_group_000", 00:18:20.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:20.853 "listen_address": { 00:18:20.853 "trtype": "TCP", 00:18:20.853 "adrfam": "IPv4", 00:18:20.853 "traddr": "10.0.0.2", 00:18:20.853 
"trsvcid": "4420" 00:18:20.853 }, 00:18:20.853 "peer_address": { 00:18:20.853 "trtype": "TCP", 00:18:20.853 "adrfam": "IPv4", 00:18:20.853 "traddr": "10.0.0.1", 00:18:20.853 "trsvcid": "35784" 00:18:20.853 }, 00:18:20.853 "auth": { 00:18:20.853 "state": "completed", 00:18:20.853 "digest": "sha256", 00:18:20.853 "dhgroup": "ffdhe6144" 00:18:20.853 } 00:18:20.853 } 00:18:20.853 ]' 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.853 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.111 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.111 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.111 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.111 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.112 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.112 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:21.112 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.679 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.938 12:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.938 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.939 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.939 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.198 00:18:22.198 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.198 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.198 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.458 { 00:18:22.458 "cntlid": 35, 00:18:22.458 "qid": 0, 00:18:22.458 "state": "enabled", 00:18:22.458 "thread": "nvmf_tgt_poll_group_000", 00:18:22.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:22.458 "listen_address": { 00:18:22.458 "trtype": "TCP", 00:18:22.458 "adrfam": "IPv4", 00:18:22.458 "traddr": "10.0.0.2", 00:18:22.458 "trsvcid": "4420" 00:18:22.458 }, 00:18:22.458 "peer_address": { 00:18:22.458 "trtype": "TCP", 00:18:22.458 "adrfam": "IPv4", 00:18:22.458 "traddr": "10.0.0.1", 00:18:22.458 "trsvcid": "40500" 00:18:22.458 }, 00:18:22.458 "auth": { 00:18:22.458 "state": "completed", 00:18:22.458 "digest": "sha256", 00:18:22.458 "dhgroup": "ffdhe6144" 00:18:22.458 } 00:18:22.458 } 00:18:22.458 ]' 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.458 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.458 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.718 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.718 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.718 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.718 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:22.718 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:23.287 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.287 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:23.288 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.288 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.288 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.288 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.288 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.288 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.548 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.807 00:18:23.807 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.807 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.808 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.067 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.067 { 00:18:24.067 "cntlid": 37, 00:18:24.067 "qid": 0, 00:18:24.067 "state": "enabled", 00:18:24.067 "thread": "nvmf_tgt_poll_group_000", 00:18:24.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:24.067 "listen_address": { 00:18:24.067 "trtype": "TCP", 00:18:24.067 "adrfam": "IPv4", 00:18:24.067 "traddr": "10.0.0.2", 00:18:24.067 "trsvcid": "4420" 00:18:24.067 }, 00:18:24.067 "peer_address": { 00:18:24.067 "trtype": "TCP", 00:18:24.067 "adrfam": "IPv4", 00:18:24.067 "traddr": "10.0.0.1", 00:18:24.067 "trsvcid": "40522" 00:18:24.067 }, 00:18:24.067 "auth": { 00:18:24.067 "state": "completed", 00:18:24.067 "digest": "sha256", 00:18:24.067 "dhgroup": "ffdhe6144" 00:18:24.067 } 00:18:24.067 } 00:18:24.067 ]' 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.067 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.326 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:24.327 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:24.894 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.153 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.413 00:18:25.413 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.413 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.413 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.673 { 00:18:25.673 "cntlid": 39, 00:18:25.673 "qid": 0, 00:18:25.673 "state": "enabled", 00:18:25.673 "thread": "nvmf_tgt_poll_group_000", 00:18:25.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:25.673 "listen_address": { 00:18:25.673 "trtype": "TCP", 00:18:25.673 "adrfam": 
"IPv4", 00:18:25.673 "traddr": "10.0.0.2", 00:18:25.673 "trsvcid": "4420" 00:18:25.673 }, 00:18:25.673 "peer_address": { 00:18:25.673 "trtype": "TCP", 00:18:25.673 "adrfam": "IPv4", 00:18:25.673 "traddr": "10.0.0.1", 00:18:25.673 "trsvcid": "40532" 00:18:25.673 }, 00:18:25.673 "auth": { 00:18:25.673 "state": "completed", 00:18:25.673 "digest": "sha256", 00:18:25.673 "dhgroup": "ffdhe6144" 00:18:25.673 } 00:18:25.673 } 00:18:25.673 ]' 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.673 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.969 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:25.969 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 
005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.609 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.610 
12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.610 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.177 00:18:27.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.177 12:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.435 { 00:18:27.435 "cntlid": 41, 00:18:27.435 "qid": 0, 00:18:27.435 "state": "enabled", 00:18:27.435 "thread": "nvmf_tgt_poll_group_000", 00:18:27.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:27.435 "listen_address": { 00:18:27.435 "trtype": "TCP", 00:18:27.435 "adrfam": "IPv4", 00:18:27.435 "traddr": "10.0.0.2", 00:18:27.435 "trsvcid": "4420" 00:18:27.435 }, 00:18:27.435 "peer_address": { 00:18:27.435 "trtype": "TCP", 00:18:27.435 "adrfam": "IPv4", 00:18:27.435 "traddr": "10.0.0.1", 00:18:27.435 "trsvcid": "40552" 00:18:27.435 }, 00:18:27.435 "auth": { 00:18:27.435 "state": "completed", 00:18:27.435 "digest": "sha256", 00:18:27.435 "dhgroup": "ffdhe8192" 00:18:27.435 } 00:18:27.435 } 00:18:27.435 ]' 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:18:27.435 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.435 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.435 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.435 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.435 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.435 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.694 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:27.694 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.261 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.828 00:18:28.828 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.828 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.828 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.087 12:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.087 { 00:18:29.087 "cntlid": 43, 00:18:29.087 "qid": 0, 00:18:29.087 "state": "enabled", 00:18:29.087 "thread": "nvmf_tgt_poll_group_000", 00:18:29.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:29.087 "listen_address": { 00:18:29.087 "trtype": "TCP", 00:18:29.087 "adrfam": "IPv4", 00:18:29.087 "traddr": "10.0.0.2", 00:18:29.087 "trsvcid": "4420" 00:18:29.087 }, 00:18:29.087 "peer_address": { 00:18:29.087 "trtype": "TCP", 00:18:29.087 "adrfam": "IPv4", 00:18:29.087 "traddr": "10.0.0.1", 00:18:29.087 "trsvcid": "40580" 00:18:29.087 }, 00:18:29.087 "auth": { 00:18:29.087 "state": "completed", 00:18:29.087 "digest": "sha256", 00:18:29.087 "dhgroup": "ffdhe8192" 00:18:29.087 } 00:18:29.087 } 00:18:29.087 ]' 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.087 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.346 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:29.346 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.916 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.486 00:18:30.486 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.486 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.486 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.745 { 00:18:30.745 "cntlid": 45, 00:18:30.745 "qid": 0, 00:18:30.745 "state": "enabled", 00:18:30.745 "thread": "nvmf_tgt_poll_group_000", 00:18:30.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:30.745 
"listen_address": { 00:18:30.745 "trtype": "TCP", 00:18:30.745 "adrfam": "IPv4", 00:18:30.745 "traddr": "10.0.0.2", 00:18:30.745 "trsvcid": "4420" 00:18:30.745 }, 00:18:30.745 "peer_address": { 00:18:30.745 "trtype": "TCP", 00:18:30.745 "adrfam": "IPv4", 00:18:30.745 "traddr": "10.0.0.1", 00:18:30.745 "trsvcid": "40602" 00:18:30.745 }, 00:18:30.745 "auth": { 00:18:30.745 "state": "completed", 00:18:30.745 "digest": "sha256", 00:18:30.745 "dhgroup": "ffdhe8192" 00:18:30.745 } 00:18:30.745 } 00:18:30.745 ]' 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.745 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.004 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:31.004 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:31.572 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.832 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.090 00:18:32.090 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.090 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:32.090 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.349 { 00:18:32.349 "cntlid": 47, 00:18:32.349 "qid": 0, 00:18:32.349 "state": "enabled", 00:18:32.349 "thread": "nvmf_tgt_poll_group_000", 00:18:32.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:32.349 "listen_address": { 00:18:32.349 "trtype": "TCP", 00:18:32.349 "adrfam": "IPv4", 00:18:32.349 "traddr": "10.0.0.2", 00:18:32.349 "trsvcid": "4420" 00:18:32.349 }, 00:18:32.349 "peer_address": { 00:18:32.349 "trtype": "TCP", 00:18:32.349 "adrfam": "IPv4", 00:18:32.349 "traddr": "10.0.0.1", 00:18:32.349 "trsvcid": "56288" 00:18:32.349 }, 00:18:32.349 "auth": { 00:18:32.349 "state": "completed", 00:18:32.349 "digest": "sha256", 00:18:32.349 "dhgroup": "ffdhe8192" 00:18:32.349 } 00:18:32.349 } 00:18:32.349 ]' 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.349 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.349 12:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.608 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.608 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.608 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.608 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.608 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.608 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:32.608 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.177 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.436 
12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.436 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.694 00:18:33.694 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.694 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.694 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.953 { 00:18:33.953 "cntlid": 49, 00:18:33.953 "qid": 0, 00:18:33.953 "state": "enabled", 00:18:33.953 "thread": "nvmf_tgt_poll_group_000", 00:18:33.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:33.953 "listen_address": { 00:18:33.953 "trtype": "TCP", 00:18:33.953 "adrfam": "IPv4", 00:18:33.953 "traddr": "10.0.0.2", 00:18:33.953 "trsvcid": "4420" 00:18:33.953 }, 00:18:33.953 "peer_address": { 00:18:33.953 "trtype": "TCP", 00:18:33.953 "adrfam": "IPv4", 00:18:33.953 "traddr": "10.0.0.1", 00:18:33.953 "trsvcid": "56298" 00:18:33.953 }, 00:18:33.953 "auth": { 00:18:33.953 "state": "completed", 00:18:33.953 "digest": "sha384", 00:18:33.953 "dhgroup": "null" 00:18:33.953 } 00:18:33.953 } 00:18:33.953 ]' 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:33.953 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.213 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:34.213 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:34.781 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.781 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:34.781 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.781 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.781 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.781 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.781 12:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.781 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.041 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.300 00:18:35.300 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.300 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.300 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.300 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.300 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.300 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.300 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.300 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.300 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.300 { 00:18:35.300 "cntlid": 51, 00:18:35.300 "qid": 0, 00:18:35.300 "state": "enabled", 00:18:35.300 "thread": "nvmf_tgt_poll_group_000", 00:18:35.300 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:35.300 "listen_address": { 00:18:35.300 "trtype": "TCP", 00:18:35.300 "adrfam": "IPv4", 00:18:35.300 "traddr": "10.0.0.2", 00:18:35.300 "trsvcid": "4420" 00:18:35.300 }, 00:18:35.300 "peer_address": { 00:18:35.300 "trtype": "TCP", 00:18:35.300 "adrfam": "IPv4", 00:18:35.300 "traddr": "10.0.0.1", 00:18:35.300 "trsvcid": "56320" 00:18:35.300 }, 00:18:35.300 "auth": { 00:18:35.300 "state": "completed", 00:18:35.300 "digest": "sha384", 00:18:35.300 "dhgroup": "null" 00:18:35.300 } 00:18:35.300 } 00:18:35.300 ]' 00:18:35.300 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.560 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.560 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.560 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:35.560 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.560 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.560 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.560 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.819 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:35.819 12:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.388 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.388 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.648 00:18:36.648 12:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.648 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.648 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.907 { 00:18:36.907 "cntlid": 53, 00:18:36.907 "qid": 0, 00:18:36.907 "state": "enabled", 00:18:36.907 "thread": "nvmf_tgt_poll_group_000", 00:18:36.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:36.907 "listen_address": { 00:18:36.907 "trtype": "TCP", 00:18:36.907 "adrfam": "IPv4", 00:18:36.907 "traddr": "10.0.0.2", 00:18:36.907 "trsvcid": "4420" 00:18:36.907 }, 00:18:36.907 "peer_address": { 00:18:36.907 "trtype": "TCP", 00:18:36.907 "adrfam": "IPv4", 00:18:36.907 "traddr": "10.0.0.1", 00:18:36.907 "trsvcid": "56346" 00:18:36.907 }, 00:18:36.907 "auth": { 00:18:36.907 "state": "completed", 00:18:36.907 "digest": "sha384", 00:18:36.907 "dhgroup": "null" 00:18:36.907 } 00:18:36.907 } 00:18:36.907 ]' 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.907 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.167 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:37.167 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:37.734 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:37.993 
12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.993 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.252 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.252 12:32:43 
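Note that the key3 round above attaches with `--dhchap-key key3` only, no controller key. That comes from the `${ckeys[...]:+...}` idiom at target/auth.sh@68: the `ckey` array expands to the controller-key flags only when a controller key exists for that index. A hedged sketch (adapted: the suite indexes with its positional arg `$3`, here a loop variable, and the array contents are illustrative):

```shell
# Illustrative key set: index 3 deliberately has no controller key,
# matching the key3 round in this log
keys=(key0 key1 key2 key3)
ckeys=(ckey0 ckey1 ckey2)

for keyid in "${!keys[@]}"; do
  # Expands to two words (--dhchap-ctrlr-key ckeyN) when ckeys[keyid] is
  # set, and to nothing at all when it is unset
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "bdev_connect -b nvme0 --dhchap-key key$keyid ${ckey[*]}"
done
```

This is why bidirectional authentication is exercised for key0-key2 but the key3 attach is one-way.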
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.252 { 00:18:38.252 "cntlid": 55, 00:18:38.252 "qid": 0, 00:18:38.252 "state": "enabled", 00:18:38.252 "thread": "nvmf_tgt_poll_group_000", 00:18:38.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:38.252 "listen_address": { 00:18:38.252 "trtype": "TCP", 00:18:38.252 "adrfam": "IPv4", 00:18:38.252 "traddr": "10.0.0.2", 00:18:38.252 "trsvcid": "4420" 00:18:38.252 }, 00:18:38.252 "peer_address": { 00:18:38.252 "trtype": "TCP", 00:18:38.252 "adrfam": "IPv4", 00:18:38.252 "traddr": "10.0.0.1", 00:18:38.252 "trsvcid": "56366" 00:18:38.252 }, 00:18:38.252 "auth": { 00:18:38.252 "state": "completed", 00:18:38.252 "digest": "sha384", 00:18:38.252 "dhgroup": "null" 00:18:38.252 } 00:18:38.252 } 00:18:38.252 ]' 00:18:38.252 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.252 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:38.511 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.079 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.079 12:32:44 
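The `for dhgroup` / `for keyid` lines above are the suite's driving loops (target/auth.sh@119-121): for each (digest, dhgroup) pair the host reconfigures `bdev_nvme_set_options`, then re-runs `connect_authenticate` for every key index. A hedged sketch of that structure, with an illustrative subset of the values this log iterates:

```shell
# Illustrative subset: this log section covers sha384 with null, then ffdhe2048
digest=sha384
dhgroups=(null ffdhe2048)
keys=(key0 key1 key2 key3)

runs=0
for dhgroup in "${dhgroups[@]}"; do
  # real suite: hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
  #                     --dhchap-dhgroups "$dhgroup"
  for keyid in "${!keys[@]}"; do
    echo "connect_authenticate $digest $dhgroup $keyid"
    runs=$((runs + 1))
  done
done
```

Each inner iteration corresponds to one full attach/verify/detach/nvme-connect cycle in the log, which is why the same four-key sequence repeats per dhgroup.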
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.338 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.597 00:18:39.597 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.597 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.597 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.855 { 00:18:39.855 "cntlid": 57, 00:18:39.855 "qid": 0, 00:18:39.855 "state": "enabled", 00:18:39.855 "thread": "nvmf_tgt_poll_group_000", 00:18:39.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:39.855 "listen_address": { 00:18:39.855 "trtype": "TCP", 00:18:39.855 "adrfam": "IPv4", 00:18:39.855 "traddr": "10.0.0.2", 00:18:39.855 
"trsvcid": "4420" 00:18:39.855 }, 00:18:39.855 "peer_address": { 00:18:39.855 "trtype": "TCP", 00:18:39.855 "adrfam": "IPv4", 00:18:39.855 "traddr": "10.0.0.1", 00:18:39.855 "trsvcid": "56394" 00:18:39.855 }, 00:18:39.855 "auth": { 00:18:39.855 "state": "completed", 00:18:39.855 "digest": "sha384", 00:18:39.855 "dhgroup": "ffdhe2048" 00:18:39.855 } 00:18:39.855 } 00:18:39.855 ]' 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.855 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.114 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:40.114 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:40.682 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.683 12:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.683 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.942 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.942 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.942 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.942 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.942 00:18:40.942 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.942 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.942 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.202 { 00:18:41.202 "cntlid": 59, 00:18:41.202 "qid": 0, 00:18:41.202 "state": "enabled", 00:18:41.202 "thread": "nvmf_tgt_poll_group_000", 00:18:41.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:41.202 "listen_address": { 00:18:41.202 "trtype": "TCP", 00:18:41.202 "adrfam": "IPv4", 00:18:41.202 "traddr": "10.0.0.2", 00:18:41.202 "trsvcid": "4420" 00:18:41.202 }, 00:18:41.202 "peer_address": { 00:18:41.202 "trtype": "TCP", 00:18:41.202 "adrfam": "IPv4", 00:18:41.202 "traddr": "10.0.0.1", 00:18:41.202 "trsvcid": "56412" 00:18:41.202 }, 00:18:41.202 "auth": { 00:18:41.202 "state": "completed", 00:18:41.202 "digest": "sha384", 00:18:41.202 "dhgroup": "ffdhe2048" 00:18:41.202 } 00:18:41.202 } 00:18:41.202 ]' 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.202 12:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.202 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.461 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.461 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.461 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.461 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.461 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.461 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:41.461 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.030 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.288 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:42.288 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.288 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.288 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:42.288 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.288 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.288 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:42.289 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.289 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.289 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.289 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.289 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.289 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.548 00:18:42.548 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.548 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.548 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.807 12:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.807 { 00:18:42.807 "cntlid": 61, 00:18:42.807 "qid": 0, 00:18:42.807 "state": "enabled", 00:18:42.807 "thread": "nvmf_tgt_poll_group_000", 00:18:42.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:42.807 "listen_address": { 00:18:42.807 "trtype": "TCP", 00:18:42.807 "adrfam": "IPv4", 00:18:42.807 "traddr": "10.0.0.2", 00:18:42.807 "trsvcid": "4420" 00:18:42.807 }, 00:18:42.807 "peer_address": { 00:18:42.807 "trtype": "TCP", 00:18:42.807 "adrfam": "IPv4", 00:18:42.807 "traddr": "10.0.0.1", 00:18:42.807 "trsvcid": "47670" 00:18:42.807 }, 00:18:42.807 "auth": { 00:18:42.807 "state": "completed", 00:18:42.807 "digest": "sha384", 00:18:42.807 "dhgroup": "ffdhe2048" 00:18:42.807 } 00:18:42.807 } 00:18:42.807 ]' 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.807 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.066 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:43.066 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.634 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.894 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.154 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.154 { 00:18:44.154 "cntlid": 63, 00:18:44.154 "qid": 0, 00:18:44.154 "state": "enabled", 00:18:44.154 "thread": "nvmf_tgt_poll_group_000", 00:18:44.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:44.154 "listen_address": { 00:18:44.154 "trtype": "TCP", 00:18:44.154 "adrfam": 
"IPv4", 00:18:44.154 "traddr": "10.0.0.2", 00:18:44.154 "trsvcid": "4420" 00:18:44.154 }, 00:18:44.154 "peer_address": { 00:18:44.154 "trtype": "TCP", 00:18:44.154 "adrfam": "IPv4", 00:18:44.154 "traddr": "10.0.0.1", 00:18:44.154 "trsvcid": "47688" 00:18:44.154 }, 00:18:44.154 "auth": { 00:18:44.154 "state": "completed", 00:18:44.154 "digest": "sha384", 00:18:44.154 "dhgroup": "ffdhe2048" 00:18:44.154 } 00:18:44.154 } 00:18:44.154 ]' 00:18:44.154 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.413 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.413 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.414 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.414 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.414 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.414 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.414 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.672 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:44.672 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 
005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.241 
12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.241 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.500 00:18:45.500 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.500 12:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.500 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.758 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.758 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.758 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.758 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.758 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.759 { 00:18:45.759 "cntlid": 65, 00:18:45.759 "qid": 0, 00:18:45.759 "state": "enabled", 00:18:45.759 "thread": "nvmf_tgt_poll_group_000", 00:18:45.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:45.759 "listen_address": { 00:18:45.759 "trtype": "TCP", 00:18:45.759 "adrfam": "IPv4", 00:18:45.759 "traddr": "10.0.0.2", 00:18:45.759 "trsvcid": "4420" 00:18:45.759 }, 00:18:45.759 "peer_address": { 00:18:45.759 "trtype": "TCP", 00:18:45.759 "adrfam": "IPv4", 00:18:45.759 "traddr": "10.0.0.1", 00:18:45.759 "trsvcid": "47722" 00:18:45.759 }, 00:18:45.759 "auth": { 00:18:45.759 "state": "completed", 00:18:45.759 "digest": "sha384", 00:18:45.759 "dhgroup": "ffdhe3072" 00:18:45.759 } 00:18:45.759 } 00:18:45.759 ]' 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.759 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.018 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:46.018 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.586 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.848 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.107 00:18:47.107 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.107 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.107 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.107 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.367 12:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.367 { 00:18:47.367 "cntlid": 67, 00:18:47.367 "qid": 0, 00:18:47.367 "state": "enabled", 00:18:47.367 "thread": "nvmf_tgt_poll_group_000", 00:18:47.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:47.367 "listen_address": { 00:18:47.367 "trtype": "TCP", 00:18:47.367 "adrfam": "IPv4", 00:18:47.367 "traddr": "10.0.0.2", 00:18:47.367 "trsvcid": "4420" 00:18:47.367 }, 00:18:47.367 "peer_address": { 00:18:47.367 "trtype": "TCP", 00:18:47.367 "adrfam": "IPv4", 00:18:47.367 "traddr": "10.0.0.1", 00:18:47.367 "trsvcid": "47740" 00:18:47.367 }, 00:18:47.367 "auth": { 00:18:47.367 "state": "completed", 00:18:47.367 "digest": "sha384", 00:18:47.367 "dhgroup": "ffdhe3072" 00:18:47.367 } 00:18:47.367 } 00:18:47.367 ]' 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.367 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.367 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.368 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.368 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.627 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:47.627 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.194 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.453 00:18:48.453 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.453 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.453 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.713 { 00:18:48.713 "cntlid": 69, 00:18:48.713 "qid": 0, 00:18:48.713 "state": "enabled", 00:18:48.713 "thread": "nvmf_tgt_poll_group_000", 00:18:48.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:48.713 
"listen_address": { 00:18:48.713 "trtype": "TCP", 00:18:48.713 "adrfam": "IPv4", 00:18:48.713 "traddr": "10.0.0.2", 00:18:48.713 "trsvcid": "4420" 00:18:48.713 }, 00:18:48.713 "peer_address": { 00:18:48.713 "trtype": "TCP", 00:18:48.713 "adrfam": "IPv4", 00:18:48.713 "traddr": "10.0.0.1", 00:18:48.713 "trsvcid": "47778" 00:18:48.713 }, 00:18:48.713 "auth": { 00:18:48.713 "state": "completed", 00:18:48.713 "digest": "sha384", 00:18:48.713 "dhgroup": "ffdhe3072" 00:18:48.713 } 00:18:48.713 } 00:18:48.713 ]' 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.713 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.972 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:48.972 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:49.539 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.539 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:49.539 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.540 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.540 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.540 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.540 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.540 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.799 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.057 00:18:50.057 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.057 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:50.057 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.316 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.316 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.316 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.316 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.316 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.316 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.316 { 00:18:50.316 "cntlid": 71, 00:18:50.316 "qid": 0, 00:18:50.316 "state": "enabled", 00:18:50.316 "thread": "nvmf_tgt_poll_group_000", 00:18:50.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:50.316 "listen_address": { 00:18:50.316 "trtype": "TCP", 00:18:50.316 "adrfam": "IPv4", 00:18:50.316 "traddr": "10.0.0.2", 00:18:50.316 "trsvcid": "4420" 00:18:50.316 }, 00:18:50.316 "peer_address": { 00:18:50.316 "trtype": "TCP", 00:18:50.316 "adrfam": "IPv4", 00:18:50.316 "traddr": "10.0.0.1", 00:18:50.316 "trsvcid": "47798" 00:18:50.316 }, 00:18:50.316 "auth": { 00:18:50.316 "state": "completed", 00:18:50.316 "digest": "sha384", 00:18:50.317 "dhgroup": "ffdhe3072" 00:18:50.317 } 00:18:50.317 } 00:18:50.317 ]' 00:18:50.317 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.317 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.317 12:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.317 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.317 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.317 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.317 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.317 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.576 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:50.576 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.143 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.403 00:18:51.662 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.662 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.662 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.662 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.663 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.663 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.663 12:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.663 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.663 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.663 { 00:18:51.663 "cntlid": 73, 00:18:51.663 "qid": 0, 00:18:51.663 "state": "enabled", 00:18:51.663 "thread": "nvmf_tgt_poll_group_000", 00:18:51.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:51.663 "listen_address": { 00:18:51.663 "trtype": "TCP", 00:18:51.663 "adrfam": "IPv4", 00:18:51.663 "traddr": "10.0.0.2", 00:18:51.663 "trsvcid": "4420" 00:18:51.663 }, 00:18:51.663 "peer_address": { 00:18:51.663 "trtype": "TCP", 00:18:51.663 "adrfam": "IPv4", 00:18:51.663 "traddr": "10.0.0.1", 00:18:51.663 "trsvcid": "47830" 00:18:51.663 }, 00:18:51.663 "auth": { 00:18:51.663 "state": "completed", 00:18:51.663 "digest": "sha384", 00:18:51.663 "dhgroup": "ffdhe4096" 00:18:51.663 } 00:18:51.663 } 00:18:51.663 ]' 00:18:51.663 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.663 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.663 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.921 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.922 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.922 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.922 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.922 12:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.922 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:51.922 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.490 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.749 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:52.749 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.749 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.749 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.750 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.009 00:18:53.009 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.009 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.009 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.268 { 00:18:53.268 "cntlid": 75, 00:18:53.268 "qid": 0, 00:18:53.268 "state": "enabled", 00:18:53.268 "thread": "nvmf_tgt_poll_group_000", 00:18:53.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:53.268 
"listen_address": { 00:18:53.268 "trtype": "TCP", 00:18:53.268 "adrfam": "IPv4", 00:18:53.268 "traddr": "10.0.0.2", 00:18:53.268 "trsvcid": "4420" 00:18:53.268 }, 00:18:53.268 "peer_address": { 00:18:53.268 "trtype": "TCP", 00:18:53.268 "adrfam": "IPv4", 00:18:53.268 "traddr": "10.0.0.1", 00:18:53.268 "trsvcid": "52214" 00:18:53.268 }, 00:18:53.268 "auth": { 00:18:53.268 "state": "completed", 00:18:53.268 "digest": "sha384", 00:18:53.268 "dhgroup": "ffdhe4096" 00:18:53.268 } 00:18:53.268 } 00:18:53.268 ]' 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.527 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:53.527 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.095 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.354 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.613 00:18:54.613 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:54.613 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.613 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.613 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.613 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.613 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.613 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.872 { 00:18:54.872 "cntlid": 77, 00:18:54.872 "qid": 0, 00:18:54.872 "state": "enabled", 00:18:54.872 "thread": "nvmf_tgt_poll_group_000", 00:18:54.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:54.872 "listen_address": { 00:18:54.872 "trtype": "TCP", 00:18:54.872 "adrfam": "IPv4", 00:18:54.872 "traddr": "10.0.0.2", 00:18:54.872 "trsvcid": "4420" 00:18:54.872 }, 00:18:54.872 "peer_address": { 00:18:54.872 "trtype": "TCP", 00:18:54.872 "adrfam": "IPv4", 00:18:54.872 "traddr": "10.0.0.1", 00:18:54.872 "trsvcid": "52232" 00:18:54.872 }, 00:18:54.872 "auth": { 00:18:54.872 "state": "completed", 00:18:54.872 "digest": "sha384", 00:18:54.872 "dhgroup": "ffdhe4096" 00:18:54.872 } 00:18:54.872 } 00:18:54.872 ]' 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.872 12:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.872 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.131 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:55.131 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:18:55.699 12:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.699 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.958 00:18:55.958 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.958 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.958 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.217 12:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.217 { 00:18:56.217 "cntlid": 79, 00:18:56.217 "qid": 0, 00:18:56.217 "state": "enabled", 00:18:56.217 "thread": "nvmf_tgt_poll_group_000", 00:18:56.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:56.217 "listen_address": { 00:18:56.217 "trtype": "TCP", 00:18:56.217 "adrfam": "IPv4", 00:18:56.217 "traddr": "10.0.0.2", 00:18:56.217 "trsvcid": "4420" 00:18:56.217 }, 00:18:56.217 "peer_address": { 00:18:56.217 "trtype": "TCP", 00:18:56.217 "adrfam": "IPv4", 00:18:56.217 "traddr": "10.0.0.1", 00:18:56.217 "trsvcid": "52258" 00:18:56.217 }, 00:18:56.217 "auth": { 00:18:56.217 "state": "completed", 00:18:56.217 "digest": "sha384", 00:18:56.217 "dhgroup": "ffdhe4096" 00:18:56.217 } 00:18:56.217 } 00:18:56.217 ]' 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:56.217 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.476 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.476 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.476 12:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.476 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:56.476 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:57.044 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.304 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.563 00:18:57.563 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.563 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.563 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.822 { 00:18:57.822 "cntlid": 81, 00:18:57.822 "qid": 0, 00:18:57.822 "state": "enabled", 00:18:57.822 "thread": "nvmf_tgt_poll_group_000", 00:18:57.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:57.822 "listen_address": { 
00:18:57.822 "trtype": "TCP", 00:18:57.822 "adrfam": "IPv4", 00:18:57.822 "traddr": "10.0.0.2", 00:18:57.822 "trsvcid": "4420" 00:18:57.822 }, 00:18:57.822 "peer_address": { 00:18:57.822 "trtype": "TCP", 00:18:57.822 "adrfam": "IPv4", 00:18:57.822 "traddr": "10.0.0.1", 00:18:57.822 "trsvcid": "52300" 00:18:57.822 }, 00:18:57.822 "auth": { 00:18:57.822 "state": "completed", 00:18:57.822 "digest": "sha384", 00:18:57.822 "dhgroup": "ffdhe6144" 00:18:57.822 } 00:18:57.822 } 00:18:57.822 ]' 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.822 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.081 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.081 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.081 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.081 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:58.081 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.650 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.909 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.168 00:18:59.168 12:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.168 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.168 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.426 { 00:18:59.426 "cntlid": 83, 00:18:59.426 "qid": 0, 00:18:59.426 "state": "enabled", 00:18:59.426 "thread": "nvmf_tgt_poll_group_000", 00:18:59.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:18:59.426 "listen_address": { 00:18:59.426 "trtype": "TCP", 00:18:59.426 "adrfam": "IPv4", 00:18:59.426 "traddr": "10.0.0.2", 00:18:59.426 "trsvcid": "4420" 00:18:59.426 }, 00:18:59.426 "peer_address": { 00:18:59.426 "trtype": "TCP", 00:18:59.426 "adrfam": "IPv4", 00:18:59.426 "traddr": "10.0.0.1", 00:18:59.426 "trsvcid": "52322" 00:18:59.426 }, 00:18:59.426 "auth": { 00:18:59.426 "state": "completed", 00:18:59.426 "digest": "sha384", 00:18:59.426 "dhgroup": "ffdhe6144" 00:18:59.426 } 00:18:59.426 } 00:18:59.426 ]' 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.426 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.684 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:18:59.684 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:00.251 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.251 12:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:00.251 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.251 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.251 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.251 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.251 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.252 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.510 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.511 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.769 00:19:00.769 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.770 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.770 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.029 { 00:19:01.029 "cntlid": 85, 00:19:01.029 "qid": 0, 00:19:01.029 "state": "enabled", 00:19:01.029 "thread": "nvmf_tgt_poll_group_000", 00:19:01.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:01.029 "listen_address": { 00:19:01.029 "trtype": "TCP", 00:19:01.029 "adrfam": "IPv4", 00:19:01.029 "traddr": "10.0.0.2", 00:19:01.029 "trsvcid": "4420" 00:19:01.029 }, 00:19:01.029 "peer_address": { 00:19:01.029 "trtype": "TCP", 00:19:01.029 "adrfam": "IPv4", 00:19:01.029 "traddr": "10.0.0.1", 00:19:01.029 "trsvcid": "52344" 00:19:01.029 }, 00:19:01.029 "auth": { 00:19:01.029 "state": "completed", 00:19:01.029 "digest": "sha384", 00:19:01.029 "dhgroup": "ffdhe6144" 00:19:01.029 } 00:19:01.029 } 00:19:01.029 ]' 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.029 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.288 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:01.288 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:01.855 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.115 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.375 00:19:02.375 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.375 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.375 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.634 { 00:19:02.634 "cntlid": 87, 00:19:02.634 "qid": 0, 00:19:02.634 "state": "enabled", 00:19:02.634 "thread": "nvmf_tgt_poll_group_000", 00:19:02.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:02.634 "listen_address": { 00:19:02.634 "trtype": 
"TCP", 00:19:02.634 "adrfam": "IPv4", 00:19:02.634 "traddr": "10.0.0.2", 00:19:02.634 "trsvcid": "4420" 00:19:02.634 }, 00:19:02.634 "peer_address": { 00:19:02.634 "trtype": "TCP", 00:19:02.634 "adrfam": "IPv4", 00:19:02.634 "traddr": "10.0.0.1", 00:19:02.634 "trsvcid": "60744" 00:19:02.634 }, 00:19:02.634 "auth": { 00:19:02.634 "state": "completed", 00:19:02.634 "digest": "sha384", 00:19:02.634 "dhgroup": "ffdhe6144" 00:19:02.634 } 00:19:02.634 } 00:19:02.634 ]' 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.634 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.893 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:02.893 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:03.483 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:03.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.483 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.861 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.861 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:03.861 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:03.861 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:04.125
00:19:04.125 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:04.125 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:04.125 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:04.125 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:04.384 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:04.384 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.384 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.385 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.385 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:04.385 {
00:19:04.385 "cntlid": 89,
00:19:04.385 "qid": 0,
00:19:04.385 "state": "enabled",
00:19:04.385 "thread": "nvmf_tgt_poll_group_000",
00:19:04.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:04.385 "listen_address": {
00:19:04.385 "trtype": "TCP",
00:19:04.385 "adrfam": "IPv4",
00:19:04.385 "traddr": "10.0.0.2",
00:19:04.385 "trsvcid": "4420"
00:19:04.385 },
00:19:04.385 "peer_address": {
00:19:04.385 "trtype": "TCP",
00:19:04.385 "adrfam": "IPv4",
00:19:04.385 "traddr": "10.0.0.1",
00:19:04.385 "trsvcid": "60784"
00:19:04.385 },
00:19:04.385 "auth": {
00:19:04.385 "state": "completed",
00:19:04.385 "digest": "sha384",
00:19:04.385 "dhgroup": "ffdhe8192"
00:19:04.385 }
00:19:04.385 }
00:19:04.385 ]'
00:19:04.385 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:04.385 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:04.385 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:04.385 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:04.385 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:04.385 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:04.385 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:04.385 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:04.643 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:04.644 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:05.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:05.211 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.470 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.470 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.470 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:05.470 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:05.471 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:05.729
00:19:05.729 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:05.729 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:05.729 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:05.989 {
00:19:05.989 "cntlid": 91,
00:19:05.989 "qid": 0,
00:19:05.989 "state": "enabled",
00:19:05.989 "thread": "nvmf_tgt_poll_group_000",
00:19:05.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:05.989 "listen_address": {
00:19:05.989 "trtype": "TCP",
00:19:05.989 "adrfam": "IPv4",
00:19:05.989 "traddr": "10.0.0.2",
00:19:05.989 "trsvcid": "4420"
00:19:05.989 },
00:19:05.989 "peer_address": {
00:19:05.989 "trtype": "TCP",
00:19:05.989 "adrfam": "IPv4",
00:19:05.989 "traddr": "10.0.0.1",
00:19:05.989 "trsvcid": "60812"
00:19:05.989 },
00:19:05.989 "auth": {
00:19:05.989 "state": "completed",
00:19:05.989 "digest": "sha384",
00:19:05.989 "dhgroup": "ffdhe8192"
00:19:05.989 }
00:19:05.989 }
00:19:05.989 ]'
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:05.989 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:05.990 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:05.990 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:05.990 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:06.248 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:06.249 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:06.249 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:06.249 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==:
00:19:06.249 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==:
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:06.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:06.817 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:07.075 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:07.644
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.644 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:07.644 {
00:19:07.644 "cntlid": 93,
00:19:07.644 "qid": 0,
00:19:07.644 "state": "enabled",
00:19:07.644 "thread": "nvmf_tgt_poll_group_000",
00:19:07.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:07.644 "listen_address": {
00:19:07.644 "trtype": "TCP",
00:19:07.644 "adrfam": "IPv4",
00:19:07.644 "traddr": "10.0.0.2",
00:19:07.644 "trsvcid": "4420"
00:19:07.644 },
00:19:07.644 "peer_address": {
00:19:07.644 "trtype": "TCP",
00:19:07.644 "adrfam": "IPv4",
00:19:07.644 "traddr": "10.0.0.1",
00:19:07.644 "trsvcid": "60834"
00:19:07.644 },
00:19:07.644 "auth": {
00:19:07.644 "state": "completed",
00:19:07.644 "digest": "sha384",
00:19:07.644 "dhgroup": "ffdhe8192"
00:19:07.644 }
00:19:07.644 }
00:19:07.644 ]'
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:07.903 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:08.162 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:08.162 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:08.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.729 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.730 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.730 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:08.730 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:08.730 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:09.297
00:19:09.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:09.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:09.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:09.557 {
00:19:09.557 "cntlid": 95,
00:19:09.557 "qid": 0,
00:19:09.557 "state": "enabled",
00:19:09.557 "thread": "nvmf_tgt_poll_group_000",
00:19:09.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:09.557 "listen_address": {
00:19:09.557 "trtype": "TCP",
00:19:09.557 "adrfam": "IPv4",
00:19:09.557 "traddr": "10.0.0.2",
00:19:09.557 "trsvcid": "4420"
00:19:09.557 },
00:19:09.557 "peer_address": {
00:19:09.557 "trtype": "TCP",
00:19:09.557 "adrfam": "IPv4",
00:19:09.557 "traddr": "10.0.0.1",
00:19:09.557 "trsvcid": "60868"
00:19:09.557 },
00:19:09.557 "auth": {
00:19:09.557 "state": "completed",
00:19:09.557 "digest": "sha384",
00:19:09.557 "dhgroup": "ffdhe8192"
00:19:09.557 }
00:19:09.557 }
00:19:09.557 ]'
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:09.557 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:09.816 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:09.816 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:10.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:10.385 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:10.385 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:10.643
00:19:10.644 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:10.644 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:10.644 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:10.902 {
00:19:10.902 "cntlid": 97,
00:19:10.902 "qid": 0,
00:19:10.902 "state": "enabled",
00:19:10.902 "thread": "nvmf_tgt_poll_group_000",
00:19:10.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:10.902 "listen_address": {
00:19:10.902 "trtype": "TCP",
00:19:10.902 "adrfam": "IPv4",
00:19:10.902 "traddr": "10.0.0.2",
00:19:10.902 "trsvcid": "4420"
00:19:10.902 },
00:19:10.902 "peer_address": {
00:19:10.902 "trtype": "TCP",
00:19:10.902 "adrfam": "IPv4",
00:19:10.902 "traddr": "10.0.0.1",
00:19:10.902 "trsvcid": "60886"
00:19:10.902 },
00:19:10.902 "auth": {
00:19:10.902 "state": "completed",
00:19:10.902 "digest": "sha512",
00:19:10.902 "dhgroup": "null"
00:19:10.902 }
00:19:10.902 }
00:19:10.902 ]'
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:10.902 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:11.161 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:11.161 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:11.161 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:11.161 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:11.161 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:11.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:11.729 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:11.989 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:12.248
00:19:12.248 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:12.248 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:12.248 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:12.507 {
00:19:12.507 "cntlid": 99,
00:19:12.507 "qid": 0, 00:19:12.507 "state": "enabled", 00:19:12.507 "thread": "nvmf_tgt_poll_group_000", 00:19:12.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:12.507 "listen_address": { 00:19:12.507 "trtype": "TCP", 00:19:12.507 "adrfam": "IPv4", 00:19:12.507 "traddr": "10.0.0.2", 00:19:12.507 "trsvcid": "4420" 00:19:12.507 }, 00:19:12.507 "peer_address": { 00:19:12.507 "trtype": "TCP", 00:19:12.507 "adrfam": "IPv4", 00:19:12.507 "traddr": "10.0.0.1", 00:19:12.507 "trsvcid": "59036" 00:19:12.507 }, 00:19:12.507 "auth": { 00:19:12.507 "state": "completed", 00:19:12.507 "digest": "sha512", 00:19:12.507 "dhgroup": "null" 00:19:12.507 } 00:19:12.507 } 00:19:12.507 ]' 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.507 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.766 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret 
DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:12.766 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.333 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.333 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.592 00:19:13.592 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.592 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.592 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.851 { 00:19:13.851 "cntlid": 101, 00:19:13.851 "qid": 0, 00:19:13.851 "state": "enabled", 00:19:13.851 "thread": "nvmf_tgt_poll_group_000", 00:19:13.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:13.851 "listen_address": { 00:19:13.851 "trtype": "TCP", 00:19:13.851 "adrfam": "IPv4", 00:19:13.851 "traddr": "10.0.0.2", 00:19:13.851 "trsvcid": "4420" 00:19:13.851 }, 00:19:13.851 "peer_address": { 00:19:13.851 "trtype": "TCP", 00:19:13.851 "adrfam": "IPv4", 00:19:13.851 "traddr": "10.0.0.1", 00:19:13.851 "trsvcid": "59056" 00:19:13.851 }, 00:19:13.851 "auth": { 00:19:13.851 "state": "completed", 00:19:13.851 "digest": "sha512", 00:19:13.851 "dhgroup": "null" 00:19:13.851 } 00:19:13.851 } 
00:19:13.851 ]' 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.851 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.110 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:14.110 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.110 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.110 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.111 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.111 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:14.111 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.677 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.677 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.936 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.194 00:19:15.194 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.194 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.194 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.453 { 00:19:15.453 "cntlid": 103, 00:19:15.453 "qid": 0, 00:19:15.453 "state": "enabled", 00:19:15.453 "thread": "nvmf_tgt_poll_group_000", 00:19:15.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:15.453 "listen_address": { 00:19:15.453 "trtype": "TCP", 00:19:15.453 "adrfam": "IPv4", 00:19:15.453 "traddr": "10.0.0.2", 00:19:15.453 "trsvcid": "4420" 00:19:15.453 }, 00:19:15.453 "peer_address": { 00:19:15.453 "trtype": "TCP", 00:19:15.453 "adrfam": "IPv4", 00:19:15.453 "traddr": "10.0.0.1", 00:19:15.453 "trsvcid": "59074" 00:19:15.453 }, 00:19:15.453 "auth": { 00:19:15.453 "state": "completed", 00:19:15.453 "digest": "sha512", 00:19:15.453 "dhgroup": "null" 00:19:15.453 } 00:19:15.453 } 00:19:15.453 ]' 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.453 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.453 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.712 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:15.712 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:16.278 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.278 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:16.278 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.278 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.279 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.279 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.279 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.279 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.279 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.537 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.537 00:19:16.796 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.796 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.796 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.796 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.796 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.796 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.797 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.797 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.797 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.797 { 00:19:16.797 "cntlid": 105, 00:19:16.797 "qid": 0, 00:19:16.797 "state": "enabled", 00:19:16.797 "thread": "nvmf_tgt_poll_group_000", 00:19:16.797 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:16.797 "listen_address": { 00:19:16.797 "trtype": "TCP", 00:19:16.797 "adrfam": "IPv4", 00:19:16.797 "traddr": "10.0.0.2", 00:19:16.797 "trsvcid": "4420" 00:19:16.797 }, 00:19:16.797 "peer_address": { 00:19:16.797 "trtype": "TCP", 00:19:16.797 "adrfam": "IPv4", 00:19:16.797 "traddr": "10.0.0.1", 00:19:16.797 "trsvcid": "59108" 00:19:16.797 }, 00:19:16.797 "auth": { 00:19:16.797 "state": "completed", 00:19:16.797 "digest": "sha512", 00:19:16.797 "dhgroup": "ffdhe2048" 00:19:16.797 } 00:19:16.797 } 00:19:16.797 ]' 00:19:16.797 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.055 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.055 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.055 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.055 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.055 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.055 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.055 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.314 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:17.314 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.880 12:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.880 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.139 00:19:18.139 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.139 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.139 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.398 { 00:19:18.398 "cntlid": 107, 00:19:18.398 "qid": 0, 00:19:18.398 "state": "enabled", 00:19:18.398 "thread": "nvmf_tgt_poll_group_000", 00:19:18.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:18.398 "listen_address": { 00:19:18.398 "trtype": "TCP", 00:19:18.398 "adrfam": "IPv4", 00:19:18.398 "traddr": "10.0.0.2", 00:19:18.398 "trsvcid": "4420" 00:19:18.398 }, 00:19:18.398 "peer_address": { 00:19:18.398 "trtype": "TCP", 00:19:18.398 "adrfam": "IPv4", 00:19:18.398 "traddr": "10.0.0.1", 00:19:18.398 "trsvcid": "59132" 00:19:18.398 }, 00:19:18.398 "auth": { 00:19:18.398 "state": 
"completed", 00:19:18.398 "digest": "sha512", 00:19:18.398 "dhgroup": "ffdhe2048" 00:19:18.398 } 00:19:18.398 } 00:19:18.398 ]' 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.398 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.657 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.657 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.657 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.657 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:18.657 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:19.225 12:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.225 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:19.225 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.225 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.225 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.225 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.225 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.225 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.484 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.742 00:19:19.742 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.742 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.742 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.742 
12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.001 { 00:19:20.001 "cntlid": 109, 00:19:20.001 "qid": 0, 00:19:20.001 "state": "enabled", 00:19:20.001 "thread": "nvmf_tgt_poll_group_000", 00:19:20.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:20.001 "listen_address": { 00:19:20.001 "trtype": "TCP", 00:19:20.001 "adrfam": "IPv4", 00:19:20.001 "traddr": "10.0.0.2", 00:19:20.001 "trsvcid": "4420" 00:19:20.001 }, 00:19:20.001 "peer_address": { 00:19:20.001 "trtype": "TCP", 00:19:20.001 "adrfam": "IPv4", 00:19:20.001 "traddr": "10.0.0.1", 00:19:20.001 "trsvcid": "59160" 00:19:20.001 }, 00:19:20.001 "auth": { 00:19:20.001 "state": "completed", 00:19:20.001 "digest": "sha512", 00:19:20.001 "dhgroup": "ffdhe2048" 00:19:20.001 } 00:19:20.001 } 00:19:20.001 ]' 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.001 12:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.001 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.261 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:20.261 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.828 
12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.828 12:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.828 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.086 00:19:21.086 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.086 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.087 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.345 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.345 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.345 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.345 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.345 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.345 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.345 { 00:19:21.345 "cntlid": 111, 
00:19:21.345 "qid": 0, 00:19:21.345 "state": "enabled", 00:19:21.345 "thread": "nvmf_tgt_poll_group_000", 00:19:21.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:21.345 "listen_address": { 00:19:21.345 "trtype": "TCP", 00:19:21.345 "adrfam": "IPv4", 00:19:21.345 "traddr": "10.0.0.2", 00:19:21.345 "trsvcid": "4420" 00:19:21.345 }, 00:19:21.345 "peer_address": { 00:19:21.345 "trtype": "TCP", 00:19:21.345 "adrfam": "IPv4", 00:19:21.345 "traddr": "10.0.0.1", 00:19:21.345 "trsvcid": "59192" 00:19:21.345 }, 00:19:21.345 "auth": { 00:19:21.345 "state": "completed", 00:19:21.345 "digest": "sha512", 00:19:21.345 "dhgroup": "ffdhe2048" 00:19:21.345 } 00:19:21.345 } 00:19:21.345 ]' 00:19:21.345 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.345 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.345 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.345 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.345 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.604 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.604 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.604 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.604 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:21.604 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.172 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.432 12:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.432 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.691 00:19:22.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.691 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.949 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.950 { 00:19:22.950 "cntlid": 113, 00:19:22.950 "qid": 0, 00:19:22.950 "state": "enabled", 00:19:22.950 "thread": "nvmf_tgt_poll_group_000", 00:19:22.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:22.950 "listen_address": { 00:19:22.950 "trtype": "TCP", 00:19:22.950 "adrfam": "IPv4", 00:19:22.950 "traddr": "10.0.0.2", 00:19:22.950 "trsvcid": "4420" 00:19:22.950 }, 00:19:22.950 "peer_address": { 00:19:22.950 "trtype": "TCP", 00:19:22.950 "adrfam": "IPv4", 00:19:22.950 "traddr": "10.0.0.1", 00:19:22.950 "trsvcid": "53674" 00:19:22.950 }, 00:19:22.950 "auth": { 00:19:22.950 "state": 
"completed", 00:19:22.950 "digest": "sha512", 00:19:22.950 "dhgroup": "ffdhe3072" 00:19:22.950 } 00:19:22.950 } 00:19:22.950 ]' 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.950 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.208 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:23.208 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:23.776 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.035 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.294 00:19:24.294 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.294 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.294 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.294 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.294 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.294 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.294 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.294 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.294 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.294 { 00:19:24.294 "cntlid": 115, 00:19:24.294 "qid": 0, 00:19:24.294 "state": "enabled", 00:19:24.294 "thread": "nvmf_tgt_poll_group_000", 00:19:24.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:24.294 "listen_address": { 00:19:24.294 "trtype": "TCP", 00:19:24.294 "adrfam": "IPv4", 00:19:24.294 "traddr": "10.0.0.2", 00:19:24.294 "trsvcid": "4420" 00:19:24.294 }, 00:19:24.294 "peer_address": { 00:19:24.294 "trtype": "TCP", 00:19:24.294 "adrfam": "IPv4", 00:19:24.294 "traddr": "10.0.0.1", 00:19:24.294 "trsvcid": "53716" 00:19:24.294 }, 00:19:24.294 "auth": { 00:19:24.294 "state": "completed", 00:19:24.294 "digest": "sha512", 00:19:24.294 "dhgroup": "ffdhe3072" 00:19:24.294 } 00:19:24.294 } 00:19:24.294 ]' 00:19:24.294 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.553 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.553 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.553 12:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.553 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.553 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.553 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.553 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.811 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:24.811 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:25.381 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.381 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.641
00:19:25.641 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:25.641 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:25.641 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:25.900 {
00:19:25.900 "cntlid": 117,
00:19:25.900 "qid": 0,
00:19:25.900 "state": "enabled",
00:19:25.900 "thread": "nvmf_tgt_poll_group_000",
00:19:25.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:25.900 "listen_address": {
00:19:25.900 "trtype": "TCP",
00:19:25.900 "adrfam": "IPv4",
00:19:25.900 "traddr": "10.0.0.2",
00:19:25.900 "trsvcid": "4420"
00:19:25.900 },
00:19:25.900 "peer_address": {
00:19:25.900 "trtype": "TCP",
00:19:25.900 "adrfam": "IPv4",
00:19:25.900 "traddr": "10.0.0.1",
00:19:25.900 "trsvcid": "53752"
00:19:25.900 },
00:19:25.900 "auth": {
00:19:25.900 "state": "completed",
00:19:25.900 "digest": "sha512",
00:19:25.900 "dhgroup": "ffdhe3072"
00:19:25.900 }
00:19:25.900 }
00:19:25.900 ]'
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:25.900 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:26.159 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:26.159 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:26.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:26.727 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:26.987 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:27.246
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:27.246 {
00:19:27.246 "cntlid": 119,
00:19:27.246 "qid": 0,
00:19:27.246 "state": "enabled",
00:19:27.246 "thread": "nvmf_tgt_poll_group_000",
00:19:27.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:27.246 "listen_address": {
00:19:27.246 "trtype": "TCP",
00:19:27.246 "adrfam": "IPv4",
00:19:27.246 "traddr": "10.0.0.2",
00:19:27.246 "trsvcid": "4420"
00:19:27.246 },
00:19:27.246 "peer_address": {
00:19:27.246 "trtype": "TCP",
00:19:27.246 "adrfam": "IPv4",
00:19:27.246 "traddr": "10.0.0.1",
00:19:27.246 "trsvcid": "53788"
00:19:27.246 },
00:19:27.246 "auth": {
00:19:27.246 "state": "completed",
00:19:27.246 "digest": "sha512",
00:19:27.246 "dhgroup": "ffdhe3072"
00:19:27.246 }
00:19:27.246 }
00:19:27.246 ]'
00:19:27.246 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:27.505 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:27.505 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:27.505 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:27.505 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:27.505 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:27.505 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:27.505 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:27.764 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:27.764 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:28.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.333 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.333 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.333 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:28.333 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:28.333 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:28.593
00:19:28.593 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:28.593 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:28.593 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:28.864 {
00:19:28.864 "cntlid": 121,
00:19:28.864 "qid": 0,
00:19:28.864 "state": "enabled",
00:19:28.864 "thread": "nvmf_tgt_poll_group_000",
00:19:28.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:28.864 "listen_address": {
00:19:28.864 "trtype": "TCP",
00:19:28.864 "adrfam": "IPv4",
00:19:28.864 "traddr": "10.0.0.2",
00:19:28.864 "trsvcid": "4420"
00:19:28.864 },
00:19:28.864 "peer_address": {
00:19:28.864 "trtype": "TCP",
00:19:28.864 "adrfam": "IPv4",
00:19:28.864 "traddr": "10.0.0.1",
00:19:28.864 "trsvcid": "53814"
00:19:28.864 },
00:19:28.864 "auth": {
00:19:28.864 "state": "completed",
00:19:28.864 "digest": "sha512",
00:19:28.864 "dhgroup": "ffdhe4096"
00:19:28.864 }
00:19:28.864 }
00:19:28.864 ]'
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:28.864 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:29.123 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:29.123 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:29.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:29.716 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:29.975 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:30.234
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:30.234 {
00:19:30.234 "cntlid": 123,
00:19:30.234 "qid": 0,
00:19:30.234 "state": "enabled",
00:19:30.234 "thread": "nvmf_tgt_poll_group_000",
00:19:30.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:30.234 "listen_address": {
00:19:30.234 "trtype": "TCP",
00:19:30.234 "adrfam": "IPv4",
00:19:30.234 "traddr": "10.0.0.2",
00:19:30.234 "trsvcid": "4420"
00:19:30.234 },
00:19:30.234 "peer_address": {
00:19:30.234 "trtype": "TCP",
00:19:30.234 "adrfam": "IPv4",
00:19:30.234 "traddr": "10.0.0.1",
00:19:30.234 "trsvcid": "53842"
00:19:30.234 },
00:19:30.234 "auth": {
00:19:30.234 "state": "completed",
00:19:30.234 "digest": "sha512",
00:19:30.234 "dhgroup": "ffdhe4096"
00:19:30.234 }
00:19:30.234 }
00:19:30.234 ]'
00:19:30.234 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:30.492 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:30.492 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:30.492 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:30.492 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:30.492 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:30.492 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:30.492 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:30.751 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==:
00:19:30.751 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==:
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:31.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.319 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.319 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.319 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:31.319 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:31.319 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:31.577
00:19:31.577 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:31.577 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:31.577 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:31.835 {
00:19:31.835 "cntlid": 125,
00:19:31.835 "qid": 0,
00:19:31.835 "state": "enabled",
00:19:31.835 "thread": "nvmf_tgt_poll_group_000",
00:19:31.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:31.835 "listen_address": {
00:19:31.835 "trtype": "TCP",
00:19:31.835 "adrfam": "IPv4",
00:19:31.835 "traddr": "10.0.0.2",
00:19:31.835 "trsvcid": "4420"
00:19:31.835 },
00:19:31.835 "peer_address": {
00:19:31.835 "trtype": "TCP",
00:19:31.835 "adrfam": "IPv4",
00:19:31.835 "traddr": "10.0.0.1",
00:19:31.835 "trsvcid": "53876"
00:19:31.835 },
00:19:31.835 "auth": {
00:19:31.835 "state": "completed",
00:19:31.835 "digest": "sha512",
00:19:31.835 "dhgroup": "ffdhe4096"
00:19:31.835 }
00:19:31.835 }
00:19:31.835 ]'
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:31.835 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:31.836 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:31.836 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:31.836 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:31.836 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:32.094 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:32.094 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:32.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:32.662 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:32.921 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:32.922 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:33.180
00:19:33.180 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:33.180 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:33.180 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.440 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.440 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.440 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.440 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.440 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.440 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.440 { 00:19:33.440 "cntlid": 127, 00:19:33.440 "qid": 0, 00:19:33.440 "state": "enabled", 00:19:33.440 "thread": "nvmf_tgt_poll_group_000", 00:19:33.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:33.440 "listen_address": { 00:19:33.440 "trtype": "TCP", 00:19:33.440 "adrfam": "IPv4", 00:19:33.440 "traddr": "10.0.0.2", 00:19:33.440 "trsvcid": "4420" 00:19:33.440 }, 00:19:33.440 "peer_address": { 00:19:33.440 "trtype": "TCP", 00:19:33.440 "adrfam": "IPv4", 00:19:33.440 "traddr": "10.0.0.1", 00:19:33.440 "trsvcid": "54828" 00:19:33.440 }, 00:19:33.440 "auth": { 00:19:33.440 "state": "completed", 00:19:33.440 "digest": "sha512", 00:19:33.440 "dhgroup": "ffdhe4096" 00:19:33.440 } 00:19:33.440 } 00:19:33.440 ]' 00:19:33.440 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.440 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.440 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.440 12:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.440 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.440 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.440 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.440 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.699 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:33.699 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.268 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.528 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.787 00:19:34.787 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.787 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.787 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.046 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.046 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.046 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.046 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.046 12:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.046 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.046 { 00:19:35.046 "cntlid": 129, 00:19:35.046 "qid": 0, 00:19:35.046 "state": "enabled", 00:19:35.046 "thread": "nvmf_tgt_poll_group_000", 00:19:35.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:35.046 "listen_address": { 00:19:35.046 "trtype": "TCP", 00:19:35.046 "adrfam": "IPv4", 00:19:35.046 "traddr": "10.0.0.2", 00:19:35.046 "trsvcid": "4420" 00:19:35.046 }, 00:19:35.046 "peer_address": { 00:19:35.046 "trtype": "TCP", 00:19:35.046 "adrfam": "IPv4", 00:19:35.046 "traddr": "10.0.0.1", 00:19:35.046 "trsvcid": "54858" 00:19:35.046 }, 00:19:35.046 "auth": { 00:19:35.046 "state": "completed", 00:19:35.046 "digest": "sha512", 00:19:35.046 "dhgroup": "ffdhe6144" 00:19:35.046 } 00:19:35.046 } 00:19:35.046 ]' 00:19:35.046 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.046 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.047 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.047 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.047 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.047 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.047 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.047 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.306 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:35.306 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.875 12:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.875 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.444 00:19:36.444 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.444 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.444 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.444 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.444 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.444 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.444 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.444 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.444 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.444 { 00:19:36.444 "cntlid": 131, 00:19:36.444 "qid": 0, 00:19:36.444 "state": "enabled", 00:19:36.444 "thread": "nvmf_tgt_poll_group_000", 00:19:36.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:36.444 "listen_address": { 00:19:36.444 "trtype": "TCP", 00:19:36.444 "adrfam": "IPv4", 00:19:36.444 "traddr": "10.0.0.2", 00:19:36.444 
"trsvcid": "4420" 00:19:36.444 }, 00:19:36.444 "peer_address": { 00:19:36.444 "trtype": "TCP", 00:19:36.444 "adrfam": "IPv4", 00:19:36.445 "traddr": "10.0.0.1", 00:19:36.445 "trsvcid": "54892" 00:19:36.445 }, 00:19:36.445 "auth": { 00:19:36.445 "state": "completed", 00:19:36.445 "digest": "sha512", 00:19:36.445 "dhgroup": "ffdhe6144" 00:19:36.445 } 00:19:36.445 } 00:19:36.445 ]' 00:19:36.445 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.445 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.445 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.705 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.705 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.705 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.705 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.705 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.963 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:36.963 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 
005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==: 00:19:37.529 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.529 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.530 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.788 00:19:37.788 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.788 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:37.788 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.046 { 00:19:38.046 "cntlid": 133, 00:19:38.046 "qid": 0, 00:19:38.046 "state": "enabled", 00:19:38.046 "thread": "nvmf_tgt_poll_group_000", 00:19:38.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:38.046 "listen_address": { 00:19:38.046 "trtype": "TCP", 00:19:38.046 "adrfam": "IPv4", 00:19:38.046 "traddr": "10.0.0.2", 00:19:38.046 "trsvcid": "4420" 00:19:38.046 }, 00:19:38.046 "peer_address": { 00:19:38.046 "trtype": "TCP", 00:19:38.046 "adrfam": "IPv4", 00:19:38.046 "traddr": "10.0.0.1", 00:19:38.046 "trsvcid": "54920" 00:19:38.046 }, 00:19:38.046 "auth": { 00:19:38.046 "state": "completed", 00:19:38.046 "digest": "sha512", 00:19:38.046 "dhgroup": "ffdhe6144" 00:19:38.046 } 00:19:38.046 } 00:19:38.046 ]' 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.046 12:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.046 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.305 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.305 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.305 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.305 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:38.305 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv: 00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:38.874 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:39.133 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:39.393
00:19:39.393 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:39.393 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:39.393 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:39.652 {
00:19:39.652 "cntlid": 135,
00:19:39.652 "qid": 0,
00:19:39.652 "state": "enabled",
00:19:39.652 "thread": "nvmf_tgt_poll_group_000",
00:19:39.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:39.652 "listen_address": {
00:19:39.652 "trtype": "TCP",
00:19:39.652 "adrfam": "IPv4",
00:19:39.652 "traddr": "10.0.0.2",
00:19:39.652 "trsvcid": "4420"
00:19:39.652 },
00:19:39.652 "peer_address": {
00:19:39.652 "trtype": "TCP",
00:19:39.652 "adrfam": "IPv4",
00:19:39.652 "traddr": "10.0.0.1",
00:19:39.652 "trsvcid": "54956"
00:19:39.652 },
00:19:39.652 "auth": {
00:19:39.652 "state": "completed",
00:19:39.652 "digest": "sha512",
00:19:39.652 "dhgroup": "ffdhe6144"
00:19:39.652 }
00:19:39.652 }
00:19:39.652 ]'
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:39.652 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:39.912 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:39.912 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:40.478 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:40.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:40.478 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:40.478 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.478 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.479 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.479 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:40.479 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:40.479 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:40.479 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:40.738 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:41.360
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:41.360 {
00:19:41.360 "cntlid": 137,
00:19:41.360 "qid": 0,
00:19:41.360 "state": "enabled",
00:19:41.360 "thread": "nvmf_tgt_poll_group_000",
00:19:41.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:41.360 "listen_address": {
00:19:41.360 "trtype": "TCP",
00:19:41.360 "adrfam": "IPv4",
00:19:41.360 "traddr": "10.0.0.2",
00:19:41.360 "trsvcid": "4420"
00:19:41.360 },
00:19:41.360 "peer_address": {
00:19:41.360 "trtype": "TCP",
00:19:41.360 "adrfam": "IPv4",
00:19:41.360 "traddr": "10.0.0.1",
00:19:41.360 "trsvcid": "54976"
00:19:41.360 },
00:19:41.360 "auth": {
00:19:41.360 "state": "completed",
00:19:41.360 "digest": "sha512",
00:19:41.360 "dhgroup": "ffdhe8192"
00:19:41.360 }
00:19:41.360 }
00:19:41.360 ]'
00:19:41.360 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:41.360 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:41.360 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:41.360 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:41.360 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:41.360 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:41.360 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:41.360 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:41.640 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:41.640 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=:
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:42.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:42.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:42.245 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.245 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.245 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.245 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:42.245 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:42.245 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:42.812
00:19:42.812 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:42.812 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:42.812 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:43.071 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:43.071 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:43.071 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.071 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.071 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.071 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:43.071 {
00:19:43.071 "cntlid": 139,
00:19:43.071 "qid": 0,
00:19:43.071 "state": "enabled",
00:19:43.071 "thread": "nvmf_tgt_poll_group_000",
00:19:43.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:43.071 "listen_address": {
00:19:43.071 "trtype": "TCP",
00:19:43.071 "adrfam": "IPv4",
00:19:43.071 "traddr": "10.0.0.2",
00:19:43.071 "trsvcid": "4420"
00:19:43.071 },
00:19:43.071 "peer_address": {
00:19:43.071 "trtype": "TCP",
00:19:43.072 "adrfam": "IPv4",
00:19:43.072 "traddr": "10.0.0.1",
00:19:43.072 "trsvcid": "54350"
00:19:43.072 },
00:19:43.072 "auth": {
00:19:43.072 "state": "completed",
00:19:43.072 "digest": "sha512",
00:19:43.072 "dhgroup": "ffdhe8192"
00:19:43.072 }
00:19:43.072 }
00:19:43.072 ]'
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:43.072 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:43.331 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==:
00:19:43.331 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: --dhchap-ctrl-secret DHHC-1:02:Yjg1MWVjZGMzMDJhM2UwNDc3ZWViYTYyNWZhODk4NzRlMGFlYzczNTFjNjc1N2JhgDMq7w==:
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:43.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:43.903 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:44.160 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:44.418
00:19:44.418 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:44.418 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:44.418 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:44.676 {
00:19:44.676 "cntlid": 141,
00:19:44.676 "qid": 0,
00:19:44.676 "state": "enabled",
00:19:44.676 "thread": "nvmf_tgt_poll_group_000",
00:19:44.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:44.676 "listen_address": {
00:19:44.676 "trtype": "TCP",
00:19:44.676 "adrfam": "IPv4",
00:19:44.676 "traddr": "10.0.0.2",
00:19:44.676 "trsvcid": "4420"
00:19:44.676 },
00:19:44.676 "peer_address": {
00:19:44.676 "trtype": "TCP",
00:19:44.676 "adrfam": "IPv4",
00:19:44.676 "traddr": "10.0.0.1",
00:19:44.676 "trsvcid": "54388"
00:19:44.676 },
00:19:44.676 "auth": {
00:19:44.676 "state": "completed",
00:19:44.676 "digest": "sha512",
00:19:44.676 "dhgroup": "ffdhe8192"
00:19:44.676 }
00:19:44.676 }
00:19:44.676 ]'
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:44.676 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:44.935 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:44.935 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:44.935 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:44.935 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:44.935 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:01:ODFhYmQ3MGM4MmUwOWIyNjA2ZGYzMGJjMWUyYWRkOWS66bZv:
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:45.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:45.531 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:45.790 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:46.048
00:19:46.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:46.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:46.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:46.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:46.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:46.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.308 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.308 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:46.308 {
00:19:46.308 "cntlid": 143,
00:19:46.308 "qid": 0,
00:19:46.308 "state": "enabled",
00:19:46.308 "thread": "nvmf_tgt_poll_group_000",
00:19:46.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:46.308 "listen_address": {
00:19:46.308 "trtype": "TCP",
00:19:46.308 "adrfam": "IPv4",
00:19:46.308 "traddr": "10.0.0.2",
00:19:46.308 "trsvcid": "4420"
00:19:46.308 },
00:19:46.308 "peer_address": {
00:19:46.308 "trtype": "TCP",
00:19:46.308 "adrfam": "IPv4",
00:19:46.308 "traddr": "10.0.0.1",
00:19:46.308 "trsvcid": "54424"
00:19:46.308 },
00:19:46.308 "auth": {
00:19:46.308 "state": "completed",
00:19:46.308 "digest": "sha512",
00:19:46.308 "dhgroup": "ffdhe8192"
00:19:46.308 }
00:19:46.308 }
00:19:46.308 ]'
00:19:46.308 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:46.308 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:46.308 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:46.567 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:46.567 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:46.567 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:46.567 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:46.567 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:46.567 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:46.567 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=:
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:47.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:47.134 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:47.392 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:47.959
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.959 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:47.959 {
00:19:47.959 "cntlid": 145,
00:19:47.959 "qid": 0,
00:19:47.959 "state": "enabled",
00:19:47.959 "thread": "nvmf_tgt_poll_group_000",
00:19:47.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562",
00:19:47.959 "listen_address": {
00:19:47.959 "trtype": "TCP",
00:19:47.959 "adrfam": "IPv4",
00:19:47.959 "traddr": "10.0.0.2",
00:19:47.959 "trsvcid": "4420"
00:19:47.959 },
00:19:47.959 "peer_address": {
00:19:47.959 "trtype": "TCP",
00:19:47.959 "adrfam": "IPv4",
00:19:47.959 "traddr": "10.0.0.1",
00:19:47.959 "trsvcid": "54456"
00:19:47.959 },
00:19:47.960 "auth": {
00:19:47.960 "state":
"completed", 00:19:47.960 "digest": "sha512", 00:19:47.960 "dhgroup": "ffdhe8192" 00:19:47.960 } 00:19:47.960 } 00:19:47.960 ]' 00:19:47.960 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.477 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:48.477 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUxMTY2NzEwZmJkOTZiMWI0ZTM2M2EzNDgzYTkzNDA0MDY2NjI1YWFhNDQ4Yjc1zR/U5Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZmU3NWIxMTY2MzgxNjI5ZDMzYmE5ZGI2ZTcyMTA4ZjEyNDljNjU4M2M3M2QxN2NmMjk5MTQ4OGMzM2M0YTNkOXQNnGc=: 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:49.045 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:49.305 request: 00:19:49.305 { 00:19:49.305 "name": "nvme0", 00:19:49.305 "trtype": "tcp", 00:19:49.305 "traddr": "10.0.0.2", 00:19:49.305 "adrfam": "ipv4", 00:19:49.305 "trsvcid": "4420", 00:19:49.305 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:49.305 "prchk_reftag": false, 00:19:49.305 "prchk_guard": false, 00:19:49.305 "hdgst": false, 00:19:49.305 "ddgst": false, 00:19:49.305 "dhchap_key": "key2", 00:19:49.305 "allow_unrecognized_csi": false, 00:19:49.305 "method": "bdev_nvme_attach_controller", 00:19:49.305 "req_id": 1 00:19:49.305 } 00:19:49.305 Got JSON-RPC error response 00:19:49.305 response: 00:19:49.305 { 00:19:49.305 "code": -5, 00:19:49.305 "message": 
"Input/output error" 00:19:49.305 } 00:19:49.305 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:49.305 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.305 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.305 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.305 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:49.305 12:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.305 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.873 request: 00:19:49.873 { 00:19:49.873 "name": "nvme0", 00:19:49.873 "trtype": "tcp", 00:19:49.873 "traddr": "10.0.0.2", 00:19:49.873 "adrfam": "ipv4", 00:19:49.873 "trsvcid": "4420", 00:19:49.873 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:49.873 "prchk_reftag": false, 00:19:49.873 "prchk_guard": false, 00:19:49.873 "hdgst": 
false, 00:19:49.873 "ddgst": false, 00:19:49.873 "dhchap_key": "key1", 00:19:49.873 "dhchap_ctrlr_key": "ckey2", 00:19:49.873 "allow_unrecognized_csi": false, 00:19:49.873 "method": "bdev_nvme_attach_controller", 00:19:49.873 "req_id": 1 00:19:49.873 } 00:19:49.873 Got JSON-RPC error response 00:19:49.873 response: 00:19:49.873 { 00:19:49.873 "code": -5, 00:19:49.873 "message": "Input/output error" 00:19:49.873 } 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.873 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.131 request: 00:19:50.131 { 00:19:50.131 "name": "nvme0", 00:19:50.131 "trtype": 
"tcp", 00:19:50.131 "traddr": "10.0.0.2", 00:19:50.131 "adrfam": "ipv4", 00:19:50.131 "trsvcid": "4420", 00:19:50.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:50.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:50.131 "prchk_reftag": false, 00:19:50.131 "prchk_guard": false, 00:19:50.131 "hdgst": false, 00:19:50.131 "ddgst": false, 00:19:50.131 "dhchap_key": "key1", 00:19:50.131 "dhchap_ctrlr_key": "ckey1", 00:19:50.131 "allow_unrecognized_csi": false, 00:19:50.131 "method": "bdev_nvme_attach_controller", 00:19:50.131 "req_id": 1 00:19:50.131 } 00:19:50.131 Got JSON-RPC error response 00:19:50.131 response: 00:19:50.131 { 00:19:50.131 "code": -5, 00:19:50.131 "message": "Input/output error" 00:19:50.131 } 00:19:50.389 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:50.389 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.389 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.389 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.389 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:50.389 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 911499 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 911499 ']' 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 911499 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 911499 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 911499' 00:19:50.390 killing process with pid 911499 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 911499 00:19:50.390 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 911499 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=934042 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 934042 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 934042 ']' 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.390 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 934042 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 934042 ']' 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.649 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.908 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.908 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:50.908 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:50.908 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.908 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.908 null0 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qxN 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ur0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ur0 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6Qc 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.T3N ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T3N 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6GS 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Lp2 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lp2 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4r2 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.167 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.168 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.735 nvme0n1 00:19:51.735 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.735 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.735 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.994 { 00:19:51.994 "cntlid": 1, 00:19:51.994 "qid": 0, 00:19:51.994 "state": "enabled", 00:19:51.994 "thread": "nvmf_tgt_poll_group_000", 00:19:51.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:51.994 "listen_address": { 00:19:51.994 "trtype": "TCP", 00:19:51.994 "adrfam": "IPv4", 00:19:51.994 "traddr": "10.0.0.2", 00:19:51.994 "trsvcid": "4420" 00:19:51.994 }, 00:19:51.994 "peer_address": { 00:19:51.994 "trtype": "TCP", 00:19:51.994 "adrfam": "IPv4", 00:19:51.994 "traddr": 
"10.0.0.1", 00:19:51.994 "trsvcid": "54510" 00:19:51.994 }, 00:19:51.994 "auth": { 00:19:51.994 "state": "completed", 00:19:51.994 "digest": "sha512", 00:19:51.994 "dhgroup": "ffdhe8192" 00:19:51.994 } 00:19:51.994 } 00:19:51.994 ]' 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.994 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.253 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.253 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.253 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.253 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:52.253 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:52.821 12:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key3 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:52.821 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:53.080 12:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.080 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.339 request: 00:19:53.339 { 00:19:53.339 "name": "nvme0", 00:19:53.339 "trtype": "tcp", 00:19:53.339 "traddr": "10.0.0.2", 00:19:53.339 "adrfam": "ipv4", 00:19:53.339 "trsvcid": "4420", 00:19:53.339 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:53.339 "prchk_reftag": false, 00:19:53.339 "prchk_guard": false, 00:19:53.339 "hdgst": false, 00:19:53.339 "ddgst": false, 00:19:53.339 "dhchap_key": "key3", 00:19:53.339 
"allow_unrecognized_csi": false, 00:19:53.339 "method": "bdev_nvme_attach_controller", 00:19:53.339 "req_id": 1 00:19:53.339 } 00:19:53.339 Got JSON-RPC error response 00:19:53.339 response: 00:19:53.339 { 00:19:53.339 "code": -5, 00:19:53.339 "message": "Input/output error" 00:19:53.339 } 00:19:53.339 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:53.339 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.339 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.339 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.339 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:53.339 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:53.339 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:53.340 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:53.598 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:53.598 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:53.598 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:53.599 12:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.599 request: 00:19:53.599 { 00:19:53.599 "name": "nvme0", 00:19:53.599 "trtype": "tcp", 00:19:53.599 "traddr": "10.0.0.2", 00:19:53.599 "adrfam": "ipv4", 00:19:53.599 "trsvcid": "4420", 00:19:53.599 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:53.599 "prchk_reftag": false, 00:19:53.599 "prchk_guard": false, 00:19:53.599 "hdgst": false, 00:19:53.599 "ddgst": false, 00:19:53.599 "dhchap_key": "key3", 00:19:53.599 "allow_unrecognized_csi": false, 00:19:53.599 "method": "bdev_nvme_attach_controller", 00:19:53.599 "req_id": 1 00:19:53.599 } 00:19:53.599 Got JSON-RPC error response 00:19:53.599 response: 00:19:53.599 { 00:19:53.599 "code": -5, 00:19:53.599 "message": "Input/output error" 00:19:53.599 } 00:19:53.599 
12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.599 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.857 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:53.857 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.857 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.857 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.857 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:53.857 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.858 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:54.118 request: 00:19:54.118 { 00:19:54.118 "name": "nvme0", 00:19:54.118 "trtype": "tcp", 00:19:54.118 "traddr": "10.0.0.2", 00:19:54.118 "adrfam": "ipv4", 00:19:54.118 "trsvcid": "4420", 00:19:54.118 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:54.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:54.118 "prchk_reftag": false, 00:19:54.118 "prchk_guard": false, 00:19:54.118 "hdgst": false, 00:19:54.118 "ddgst": false, 00:19:54.118 "dhchap_key": "key0", 00:19:54.118 "dhchap_ctrlr_key": "key1", 00:19:54.118 "allow_unrecognized_csi": false, 00:19:54.118 "method": "bdev_nvme_attach_controller", 00:19:54.118 "req_id": 1 00:19:54.118 } 00:19:54.118 Got JSON-RPC error response 00:19:54.118 response: 00:19:54.118 { 00:19:54.118 "code": -5, 00:19:54.118 "message": "Input/output error" 00:19:54.118 } 00:19:54.118 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:54.118 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:54.118 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:54.118 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:54.118 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:54.118 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:54.118 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:54.377 nvme0n1 00:19:54.377 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:54.377 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:54.377 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.636 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.636 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.636 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.895 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 00:19:54.895 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.895 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:54.895 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.895 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:54.895 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:54.895 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:55.463 nvme0n1 00:19:55.463 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:55.463 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:55.463 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.722 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.722 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:55.722 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.722 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.722 
12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.722 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:55.722 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:55.722 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.982 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.982 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:55.982 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid 005363bc-ad7e-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: --dhchap-ctrl-secret DHHC-1:03:MGNiODJkMzRmNWU2OGM1NmI3YzUyNWYyNmY4MTk4OTgyYjhmOWIzNmNjNGE1ODc3NWFmNzRmMGFjMDJlNzgyYuCp6NA=: 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:56.551 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:56.552 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:57.121 request: 00:19:57.121 { 00:19:57.121 "name": "nvme0", 00:19:57.121 "trtype": "tcp", 00:19:57.121 "traddr": "10.0.0.2", 00:19:57.121 "adrfam": "ipv4", 00:19:57.121 "trsvcid": "4420", 00:19:57.121 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:57.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562", 00:19:57.121 "prchk_reftag": false, 00:19:57.121 "prchk_guard": false, 00:19:57.121 "hdgst": false, 00:19:57.121 "ddgst": false, 00:19:57.121 "dhchap_key": "key1", 00:19:57.121 "allow_unrecognized_csi": false, 00:19:57.121 "method": "bdev_nvme_attach_controller", 00:19:57.121 "req_id": 1 00:19:57.121 } 00:19:57.121 Got JSON-RPC error response 00:19:57.121 response: 00:19:57.121 { 00:19:57.121 "code": -5, 00:19:57.121 "message": "Input/output error" 00:19:57.121 } 00:19:57.121 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:57.121 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.121 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.121 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.121 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:57.121 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:57.121 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:57.690 nvme0n1 00:19:57.690 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:57.690 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:57.690 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.949 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.949 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.949 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.208 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:19:58.208 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.208 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.208 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.208 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:58.208 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:58.208 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:58.467 nvme0n1 00:19:58.467 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:58.467 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.467 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:58.467 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.467 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.468 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: '' 2s 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: ]] 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NWViMmJkNGY0OTEwZDJlZWYwOWE2MjA1OGU5YjA5OGN9LONC: 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:58.727 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:01.262 
12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: 2s 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:01.262 12:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: ]] 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTIzN2M0NGM5NDhlNWJlN2VjMzE5NDQ0ZTM4ZWNkYThkNDRlYWU2MTAwZjJjYjJlWCBwQQ==: 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:01.262 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:03.167 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:03.167 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:03.167 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:03.168 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:03.736 nvme0n1 00:20:03.736 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:20:03.736 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.736 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.736 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.736 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:03.736 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:03.995 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:03.995 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:03.995 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.253 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.253 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:20:04.253 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.253 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.253 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.253 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:04.253 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:04.512 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:05.080 request: 00:20:05.080 { 00:20:05.080 "name": "nvme0", 00:20:05.080 "dhchap_key": "key1", 00:20:05.080 "dhchap_ctrlr_key": "key3", 00:20:05.080 "method": "bdev_nvme_set_keys", 00:20:05.080 "req_id": 1 00:20:05.080 } 00:20:05.080 Got JSON-RPC error response 00:20:05.080 response: 00:20:05.080 { 00:20:05.080 "code": -13, 00:20:05.080 "message": "Permission denied" 00:20:05.080 } 00:20:05.080 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:05.080 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:05.080 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:05.080 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:05.080 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:05.080 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:05.080 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.339 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:05.339 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:06.277 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:06.277 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:06.277 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:06.536 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:07.104 nvme0n1 00:20:07.104 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:07.104 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.104 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.104 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.105 12:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:07.105 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:07.674 request: 00:20:07.674 { 00:20:07.674 "name": "nvme0", 00:20:07.674 "dhchap_key": "key2", 00:20:07.674 "dhchap_ctrlr_key": "key0", 00:20:07.674 "method": "bdev_nvme_set_keys", 00:20:07.674 "req_id": 1 00:20:07.674 } 00:20:07.674 Got JSON-RPC error response 00:20:07.674 response: 00:20:07.674 { 00:20:07.674 "code": -13, 00:20:07.674 "message": "Permission denied" 00:20:07.674 } 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.674 12:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:07.674 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 911774 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 911774 ']' 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 911774 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 911774 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 911774' 00:20:09.052 killing process with pid 911774 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 911774 00:20:09.052 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 911774 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.311 rmmod nvme_tcp 00:20:09.311 rmmod nvme_fabrics 00:20:09.311 rmmod nvme_keyring 00:20:09.311 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 934042 ']' 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 934042 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 934042 ']' 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 934042 00:20:09.311 12:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 934042 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 934042' 00:20:09.311 killing process with pid 934042 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 934042 00:20:09.311 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 934042 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.571 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qxN /tmp/spdk.key-sha256.6Qc /tmp/spdk.key-sha384.6GS /tmp/spdk.key-sha512.4r2 /tmp/spdk.key-sha512.ur0 /tmp/spdk.key-sha384.T3N /tmp/spdk.key-sha256.Lp2 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:12.106 00:20:12.106 real 2m24.589s 00:20:12.106 user 5m30.647s 00:20:12.106 sys 0m23.601s 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.106 ************************************ 00:20:12.106 END TEST nvmf_auth_target 00:20:12.106 ************************************ 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:12.106 12:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.106 12:34:17 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.106 ************************************ 00:20:12.106 START TEST nvmf_bdevio_no_huge 00:20:12.106 ************************************ 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:12.107 * Looking for test storage... 00:20:12.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.107 12:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:12.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.107 --rc genhtml_branch_coverage=1 00:20:12.107 --rc genhtml_function_coverage=1 00:20:12.107 --rc genhtml_legend=1 00:20:12.107 --rc geninfo_all_blocks=1 00:20:12.107 --rc geninfo_unexecuted_blocks=1 00:20:12.107 00:20:12.107 ' 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:12.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.107 --rc genhtml_branch_coverage=1 00:20:12.107 --rc genhtml_function_coverage=1 00:20:12.107 --rc genhtml_legend=1 00:20:12.107 --rc geninfo_all_blocks=1 00:20:12.107 --rc geninfo_unexecuted_blocks=1 00:20:12.107 00:20:12.107 ' 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:12.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.107 --rc genhtml_branch_coverage=1 00:20:12.107 --rc genhtml_function_coverage=1 00:20:12.107 --rc genhtml_legend=1 00:20:12.107 --rc geninfo_all_blocks=1 00:20:12.107 --rc geninfo_unexecuted_blocks=1 00:20:12.107 00:20:12.107 ' 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:12.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.107 --rc genhtml_branch_coverage=1 00:20:12.107 --rc 
genhtml_function_coverage=1 00:20:12.107 --rc genhtml_legend=1 00:20:12.107 --rc geninfo_all_blocks=1 00:20:12.107 --rc geninfo_unexecuted_blocks=1 00:20:12.107 00:20:12.107 ' 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:20:12.107 12:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.107 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
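Earlier in this trace, nvmf/common.sh@17 derives NVME_HOSTNQN from `nvme gen-hostnqn`. The same NQN shape can be assembled by hand from a UUID; this is a hedged sketch only (the `uuidgen`/`/proc` fallback is our assumption for illustration, not what common.sh actually runs):

```shell
# Sketch only: common.sh calls `nvme gen-hostnqn`; the uuidgen fallback here
# is an illustrative assumption that produces the same NQN format.
gen_hostnqn() {
    local uuid
    uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
    echo "nqn.2014-08.org.nvmexpress:uuid:${uuid}"
}

gen_hostnqn
```

The resulting string matches the `nqn.2014-08.org.nvmexpress:uuid:...` value visible in the trace above.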
00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:12.108 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 
0x159b)' 00:20:18.680 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.680 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:20:18.680 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:20:18.681 Found net devices under 0000:1a:00.0: cvl_0_0 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.681 
12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:20:18.681 Found net devices under 0000:1a:00.1: cvl_0_1 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
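The discovery loop traced above matches each NIC's PCI vendor:device ID against the e810/x722/mlx tables built in nvmf/common.sh before collecting its net devices (here, two Intel 0x8086:0x159b ports under 0000:1a:00.x). A minimal standalone sketch of that classification — the `classify_nic` helper is hypothetical; the ID table copies the entries visible in the trace:

```shell
# Hypothetical helper mirroring the vendor/device tables from nvmf/common.sh.
# classify_nic <vendor-id> <device-id> -> e810 | x722 | mlx | unknown
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

# The run above found two 0x8086:0x159b ports:
classify_nic 0x8086 0x159b   # prints "e810"
```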
00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:18.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:20:18.681 00:20:18.681 --- 10.0.0.2 ping statistics --- 00:20:18.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.681 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:20:18.681 00:20:18.681 --- 10.0.0.1 ping statistics --- 00:20:18.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.681 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=941235 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 941235 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 941235 ']' 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.681 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.681 [2024-11-20 12:34:23.781498] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:20:18.681 [2024-11-20 12:34:23.781543] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:18.681 [2024-11-20 12:34:23.864723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.681 [2024-11-20 12:34:23.908711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.681 [2024-11-20 12:34:23.908746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.681 [2024-11-20 12:34:23.908754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.681 [2024-11-20 12:34:23.908760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.681 [2024-11-20 12:34:23.908765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:18.681 [2024-11-20 12:34:23.910012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:18.681 [2024-11-20 12:34:23.910124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:18.681 [2024-11-20 12:34:23.910236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.681 [2024-11-20 12:34:23.910238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 [2024-11-20 12:34:24.642821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.941 12:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 Malloc0 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 [2024-11-20 12:34:24.687099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.941 12:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.941 { 00:20:18.941 "params": { 00:20:18.941 "name": "Nvme$subsystem", 00:20:18.941 "trtype": "$TEST_TRANSPORT", 00:20:18.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.941 "adrfam": "ipv4", 00:20:18.941 "trsvcid": "$NVMF_PORT", 00:20:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.941 "hdgst": ${hdgst:-false}, 00:20:18.941 "ddgst": ${ddgst:-false} 00:20:18.941 }, 00:20:18.941 "method": "bdev_nvme_attach_controller" 00:20:18.941 } 00:20:18.941 EOF 00:20:18.941 )") 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:18.941 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
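`gen_nvmf_target_json` (nvmf/common.sh@560-586, traced above) expands a per-subsystem heredoc into the `--json` config that bdevio reads from /dev/fd/62. A self-contained sketch that emits the same shape of document — the helper name `gen_target_json` and its fixed defaults are assumptions for illustration, with values mirroring the trace:

```shell
# Sketch of the one-subsystem config gen_nvmf_target_json produces.
# gen_target_json is a hypothetical stand-in; defaults mirror the log above.
gen_target_json() {
    local n=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 1
```

The printed document matches the `printf '%s\n' '{ ... }'` output logged immediately after, which bdevio uses to attach Nvme1 to the listener created at 10.0.0.2:4420.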
00:20:19.201 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:19.201 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:19.201 "params": { 00:20:19.201 "name": "Nvme1", 00:20:19.201 "trtype": "tcp", 00:20:19.201 "traddr": "10.0.0.2", 00:20:19.201 "adrfam": "ipv4", 00:20:19.201 "trsvcid": "4420", 00:20:19.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.201 "hdgst": false, 00:20:19.201 "ddgst": false 00:20:19.201 }, 00:20:19.201 "method": "bdev_nvme_attach_controller" 00:20:19.201 }' 00:20:19.201 [2024-11-20 12:34:24.736082] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:20:19.201 [2024-11-20 12:34:24.736126] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid941396 ] 00:20:19.201 [2024-11-20 12:34:24.814493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.201 [2024-11-20 12:34:24.859477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.201 [2024-11-20 12:34:24.859587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.201 [2024-11-20 12:34:24.859588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.460 I/O targets: 00:20:19.460 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:19.460 00:20:19.460 00:20:19.460 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.460 http://cunit.sourceforge.net/ 00:20:19.460 00:20:19.460 00:20:19.460 Suite: bdevio tests on: Nvme1n1 00:20:19.460 Test: blockdev write read block ...passed 00:20:19.460 Test: blockdev write zeroes read block ...passed 00:20:19.460 Test: blockdev write zeroes read no split ...passed 00:20:19.460 Test: blockdev write zeroes 
read split ...passed 00:20:19.719 Test: blockdev write zeroes read split partial ...passed 00:20:19.719 Test: blockdev reset ...[2024-11-20 12:34:25.261109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:19.719 [2024-11-20 12:34:25.261171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc82ca0 (9): Bad file descriptor 00:20:19.719 [2024-11-20 12:34:25.275208] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:19.719 passed 00:20:19.719 Test: blockdev write read 8 blocks ...passed 00:20:19.719 Test: blockdev write read size > 128k ...passed 00:20:19.719 Test: blockdev write read invalid size ...passed 00:20:19.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.719 Test: blockdev write read max offset ...passed 00:20:19.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.719 Test: blockdev writev readv 8 blocks ...passed 00:20:19.979 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.979 Test: blockdev writev readv block ...passed 00:20:19.979 Test: blockdev writev readv size > 128k ...passed 00:20:19.979 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.979 Test: blockdev comparev and writev ...[2024-11-20 12:34:25.527874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 12:34:25.527901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.527914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 
12:34:25.527921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.528129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 12:34:25.528140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.528150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 12:34:25.528156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.528366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 12:34:25.528376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.528387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 12:34:25.528394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.528614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 12:34:25.528624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.528635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.979 [2024-11-20 12:34:25.528642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:19.979 passed 00:20:19.979 Test: blockdev nvme passthru rw ...passed 00:20:19.979 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:34:25.612869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.979 [2024-11-20 12:34:25.612884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.612979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.979 [2024-11-20 12:34:25.612988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.613086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.979 [2024-11-20 12:34:25.613094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:19.979 [2024-11-20 12:34:25.613182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.979 [2024-11-20 12:34:25.613191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:19.979 passed 00:20:19.979 Test: blockdev nvme admin passthru ...passed 00:20:19.979 Test: blockdev copy ...passed 00:20:19.979 00:20:19.979 Run Summary: Type Total Ran Passed Failed Inactive 00:20:19.979 suites 1 1 n/a 0 0 00:20:19.979 tests 23 23 23 0 0 00:20:19.979 asserts 152 152 152 0 n/a 00:20:19.979 00:20:19.979 Elapsed time = 1.249 seconds 
00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.239 rmmod nvme_tcp 00:20:20.239 rmmod nvme_fabrics 00:20:20.239 rmmod nvme_keyring 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 941235 ']' 00:20:20.239 12:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 941235 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 941235 ']' 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 941235 00:20:20.239 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:20.499 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.499 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 941235 00:20:20.499 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:20.499 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:20.499 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 941235' 00:20:20.499 killing process with pid 941235 00:20:20.499 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 941235 00:20:20.499 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 941235 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:20.757 12:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.757 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.663 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:22.663 00:20:22.663 real 0m11.046s 00:20:22.663 user 0m13.513s 00:20:22.663 sys 0m5.544s 00:20:22.663 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.663 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.663 ************************************ 00:20:22.663 END TEST nvmf_bdevio_no_huge 00:20:22.663 ************************************ 00:20:22.922 12:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:22.922 12:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:22.922 12:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.922 12:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:22.922 
************************************ 00:20:22.922 START TEST nvmf_tls 00:20:22.922 ************************************ 00:20:22.922 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:22.922 * Looking for test storage... 00:20:22.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.922 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.923 --rc genhtml_branch_coverage=1 00:20:22.923 --rc genhtml_function_coverage=1 00:20:22.923 --rc genhtml_legend=1 00:20:22.923 --rc geninfo_all_blocks=1 00:20:22.923 --rc geninfo_unexecuted_blocks=1 00:20:22.923 00:20:22.923 ' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.923 --rc genhtml_branch_coverage=1 00:20:22.923 --rc genhtml_function_coverage=1 00:20:22.923 --rc genhtml_legend=1 00:20:22.923 --rc geninfo_all_blocks=1 00:20:22.923 --rc geninfo_unexecuted_blocks=1 00:20:22.923 00:20:22.923 ' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.923 --rc genhtml_branch_coverage=1 00:20:22.923 --rc genhtml_function_coverage=1 00:20:22.923 --rc genhtml_legend=1 00:20:22.923 --rc geninfo_all_blocks=1 00:20:22.923 --rc geninfo_unexecuted_blocks=1 00:20:22.923 00:20:22.923 ' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.923 --rc genhtml_branch_coverage=1 00:20:22.923 --rc genhtml_function_coverage=1 00:20:22.923 --rc genhtml_legend=1 00:20:22.923 --rc geninfo_all_blocks=1 00:20:22.923 --rc geninfo_unexecuted_blocks=1 00:20:22.923 00:20:22.923 ' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.923 
12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.923 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.924 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.182 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.182 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.182 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.182 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.182 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:23.183 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.754 12:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:20:29.754 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:20:29.754 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.754 12:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:20:29.754 Found net devices under 0000:1a:00.0: cvl_0_0 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:20:29.754 Found net devices under 0000:1a:00.1: cvl_0_1 00:20:29.754 12:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:29.754 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:29.755 
12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:29.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:20:29.755 00:20:29.755 --- 10.0.0.2 ping statistics --- 00:20:29.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.755 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:20:29.755 00:20:29.755 --- 10.0.0.1 ping statistics --- 00:20:29.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.755 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=945437 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 945437 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 945437 ']' 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.755 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 [2024-11-20 12:34:34.921260] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:20:29.755 [2024-11-20 12:34:34.921302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.755 [2024-11-20 12:34:34.997086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.755 [2024-11-20 12:34:35.033622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.755 [2024-11-20 12:34:35.033654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:29.755 [2024-11-20 12:34:35.033660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.755 [2024-11-20 12:34:35.033665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.755 [2024-11-20 12:34:35.033669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.755 [2024-11-20 12:34:35.034225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.014 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.014 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.014 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.014 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.014 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.300 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.300 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:30.300 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:30.300 true 00:20:30.300 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.300 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:30.595 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:30.595 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:30.595 
12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:30.595 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.595 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:30.859 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:30.859 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:30.859 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:31.118 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.118 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:31.378 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:31.378 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:31.378 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.378 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:31.378 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:31.378 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:31.378 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:31.637 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.637 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:31.896 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:31.896 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:31.896 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:31.896 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.896 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:32.155 12:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.UrTPxHHyUE 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.cbPrW1Wn5b 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.UrTPxHHyUE 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.cbPrW1Wn5b 00:20:32.155 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:32.414 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:32.673 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.UrTPxHHyUE 00:20:32.673 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UrTPxHHyUE 00:20:32.673 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.931 [2024-11-20 12:34:38.450437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.931 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:32.931 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:33.190 [2024-11-20 12:34:38.791293] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.190 [2024-11-20 12:34:38.791568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.190 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:33.451 malloc0 00:20:33.451 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:33.451 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UrTPxHHyUE 00:20:33.709 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:33.968 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UrTPxHHyUE 00:20:43.945 Initializing NVMe Controllers 00:20:43.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:43.945 Initialization complete. Launching workers. 
00:20:43.945 ======================================================== 00:20:43.945 Latency(us) 00:20:43.945 Device Information : IOPS MiB/s Average min max 00:20:43.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18122.75 70.79 3531.78 770.55 5793.94 00:20:43.945 ======================================================== 00:20:43.945 Total : 18122.75 70.79 3531.78 770.55 5793.94 00:20:43.945 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UrTPxHHyUE 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UrTPxHHyUE 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=948116 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 948116 /var/tmp/bdevperf.sock 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 948116 ']' 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.945 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.945 [2024-11-20 12:34:49.664741] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:20:43.945 [2024-11-20 12:34:49.664787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948116 ] 00:20:44.204 [2024-11-20 12:34:49.738590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.204 [2024-11-20 12:34:49.778628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.204 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.204 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.204 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UrTPxHHyUE 00:20:44.464 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:44.464 [2024-11-20 12:34:50.187351] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.722 TLSTESTn1 00:20:44.722 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:44.722 Running I/O for 10 seconds... 00:20:47.037 6125.00 IOPS, 23.93 MiB/s [2024-11-20T11:34:53.739Z] 6230.00 IOPS, 24.34 MiB/s [2024-11-20T11:34:54.677Z] 6270.67 IOPS, 24.49 MiB/s [2024-11-20T11:34:55.615Z] 6294.25 IOPS, 24.59 MiB/s [2024-11-20T11:34:56.552Z] 6300.60 IOPS, 24.61 MiB/s [2024-11-20T11:34:57.490Z] 6223.00 IOPS, 24.31 MiB/s [2024-11-20T11:34:58.427Z] 6190.29 IOPS, 24.18 MiB/s [2024-11-20T11:34:59.804Z] 6220.12 IOPS, 24.30 MiB/s [2024-11-20T11:35:00.743Z] 6237.33 IOPS, 24.36 MiB/s [2024-11-20T11:35:00.743Z] 6254.00 IOPS, 24.43 MiB/s 00:20:54.979 Latency(us) 00:20:54.979 [2024-11-20T11:35:00.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.979 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.979 Verification LBA range: start 0x0 length 0x2000 00:20:54.979 TLSTESTn1 : 10.02 6256.72 24.44 0.00 0.00 20426.73 4676.89 28240.06 00:20:54.979 [2024-11-20T11:35:00.743Z] =================================================================================================================== 00:20:54.979 [2024-11-20T11:35:00.743Z] Total : 6256.72 24.44 0.00 0.00 20426.73 4676.89 28240.06 00:20:54.979 { 00:20:54.979 "results": [ 00:20:54.979 { 00:20:54.979 "job": "TLSTESTn1", 00:20:54.979 "core_mask": "0x4", 00:20:54.979 "workload": "verify", 00:20:54.979 "status": "finished", 00:20:54.979 "verify_range": { 00:20:54.979 "start": 0, 00:20:54.979 "length": 8192 00:20:54.979 }, 00:20:54.979 "queue_depth": 128, 00:20:54.979 "io_size": 4096, 00:20:54.979 "runtime": 10.015952, 00:20:54.979 "iops": 
6256.719281402307, 00:20:54.979 "mibps": 24.440309692977763, 00:20:54.979 "io_failed": 0, 00:20:54.979 "io_timeout": 0, 00:20:54.979 "avg_latency_us": 20426.731684502647, 00:20:54.979 "min_latency_us": 4676.887272727273, 00:20:54.979 "max_latency_us": 28240.05818181818 00:20:54.979 } 00:20:54.979 ], 00:20:54.979 "core_count": 1 00:20:54.979 } 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 948116 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 948116 ']' 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 948116 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948116 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948116' 00:20:54.979 killing process with pid 948116 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 948116 00:20:54.979 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.979 00:20:54.979 Latency(us) 00:20:54.979 [2024-11-20T11:35:00.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.979 [2024-11-20T11:35:00.743Z] 
=================================================================================================================== 00:20:54.979 [2024-11-20T11:35:00.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 948116 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cbPrW1Wn5b 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cbPrW1Wn5b 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cbPrW1Wn5b 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cbPrW1Wn5b 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=949963 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 949963 /var/tmp/bdevperf.sock 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 949963 ']' 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.979 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.980 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.980 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.980 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.980 [2024-11-20 12:35:00.689396] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:20:54.980 [2024-11-20 12:35:00.689449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949963 ] 00:20:55.238 [2024-11-20 12:35:00.752297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.239 [2024-11-20 12:35:00.791581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.239 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.239 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:55.239 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cbPrW1Wn5b 00:20:55.498 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.498 [2024-11-20 12:35:01.216586] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.498 [2024-11-20 12:35:01.226132] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:55.498 [2024-11-20 12:35:01.226925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160a5b0 (107): Transport endpoint is not connected 00:20:55.498 [2024-11-20 12:35:01.227920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160a5b0 (9): Bad file descriptor 00:20:55.498 
[2024-11-20 12:35:01.228922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:55.498 [2024-11-20 12:35:01.228933] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:55.498 [2024-11-20 12:35:01.228940] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:55.498 [2024-11-20 12:35:01.228949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:55.498 request: 00:20:55.498 { 00:20:55.498 "name": "TLSTEST", 00:20:55.498 "trtype": "tcp", 00:20:55.498 "traddr": "10.0.0.2", 00:20:55.498 "adrfam": "ipv4", 00:20:55.498 "trsvcid": "4420", 00:20:55.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.498 "prchk_reftag": false, 00:20:55.498 "prchk_guard": false, 00:20:55.498 "hdgst": false, 00:20:55.498 "ddgst": false, 00:20:55.498 "psk": "key0", 00:20:55.498 "allow_unrecognized_csi": false, 00:20:55.498 "method": "bdev_nvme_attach_controller", 00:20:55.498 "req_id": 1 00:20:55.498 } 00:20:55.498 Got JSON-RPC error response 00:20:55.498 response: 00:20:55.498 { 00:20:55.498 "code": -5, 00:20:55.498 "message": "Input/output error" 00:20:55.498 } 00:20:55.498 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 949963 00:20:55.498 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 949963 ']' 00:20:55.498 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 949963 00:20:55.498 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:55.498 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.498 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949963 00:20:55.756 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949963' 00:20:55.757 killing process with pid 949963 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 949963 00:20:55.757 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.757 00:20:55.757 Latency(us) 00:20:55.757 [2024-11-20T11:35:01.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.757 [2024-11-20T11:35:01.521Z] =================================================================================================================== 00:20:55.757 [2024-11-20T11:35:01.521Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 949963 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UrTPxHHyUE 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UrTPxHHyUE 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UrTPxHHyUE 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UrTPxHHyUE 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=950228 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 950228 
/var/tmp/bdevperf.sock 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 950228 ']' 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.757 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.757 [2024-11-20 12:35:01.497206] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:20:55.757 [2024-11-20 12:35:01.497251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950228 ] 00:20:56.015 [2024-11-20 12:35:01.564874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.015 [2024-11-20 12:35:01.601687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.015 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.015 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:56.016 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UrTPxHHyUE 00:20:56.274 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:56.533 [2024-11-20 12:35:02.038754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.533 [2024-11-20 12:35:02.048089] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:56.533 [2024-11-20 12:35:02.048110] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:56.533 [2024-11-20 12:35:02.048131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:56.533 [2024-11-20 12:35:02.049087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7f5b0 (107): Transport endpoint is not connected 00:20:56.533 [2024-11-20 12:35:02.050082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7f5b0 (9): Bad file descriptor 00:20:56.533 [2024-11-20 12:35:02.051084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:56.533 [2024-11-20 12:35:02.051094] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:56.533 [2024-11-20 12:35:02.051101] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:56.533 [2024-11-20 12:35:02.051110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:56.533 request: 00:20:56.533 { 00:20:56.533 "name": "TLSTEST", 00:20:56.533 "trtype": "tcp", 00:20:56.533 "traddr": "10.0.0.2", 00:20:56.533 "adrfam": "ipv4", 00:20:56.533 "trsvcid": "4420", 00:20:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.533 "prchk_reftag": false, 00:20:56.533 "prchk_guard": false, 00:20:56.533 "hdgst": false, 00:20:56.533 "ddgst": false, 00:20:56.533 "psk": "key0", 00:20:56.533 "allow_unrecognized_csi": false, 00:20:56.533 "method": "bdev_nvme_attach_controller", 00:20:56.533 "req_id": 1 00:20:56.533 } 00:20:56.533 Got JSON-RPC error response 00:20:56.533 response: 00:20:56.533 { 00:20:56.533 "code": -5, 00:20:56.533 "message": "Input/output error" 00:20:56.533 } 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 950228 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 950228 ']' 00:20:56.533 12:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 950228 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950228 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950228' 00:20:56.533 killing process with pid 950228 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 950228 00:20:56.533 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.533 00:20:56.533 Latency(us) 00:20:56.533 [2024-11-20T11:35:02.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.533 [2024-11-20T11:35:02.297Z] =================================================================================================================== 00:20:56.533 [2024-11-20T11:35:02.297Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 950228 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:56.533 12:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UrTPxHHyUE 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UrTPxHHyUE 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UrTPxHHyUE 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:56.533 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UrTPxHHyUE 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=950258 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 950258 /var/tmp/bdevperf.sock 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 950258 ']' 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.534 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.794 [2024-11-20 12:35:02.330365] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:20:56.794 [2024-11-20 12:35:02.330422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950258 ] 00:20:56.794 [2024-11-20 12:35:02.403043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.794 [2024-11-20 12:35:02.439315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.794 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.794 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:56.794 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UrTPxHHyUE 00:20:57.059 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:57.319 [2024-11-20 12:35:02.892345] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.319 [2024-11-20 12:35:02.900224] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:57.319 [2024-11-20 12:35:02.900244] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:57.319 [2024-11-20 12:35:02.900263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:57.319 [2024-11-20 12:35:02.900696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ae5b0 (107): Transport endpoint is not connected 00:20:57.319 [2024-11-20 12:35:02.901691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ae5b0 (9): Bad file descriptor 00:20:57.319 [2024-11-20 12:35:02.902693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:57.319 [2024-11-20 12:35:02.902704] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:57.319 [2024-11-20 12:35:02.902710] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:57.319 [2024-11-20 12:35:02.902720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:57.319 request: 00:20:57.319 { 00:20:57.319 "name": "TLSTEST", 00:20:57.319 "trtype": "tcp", 00:20:57.319 "traddr": "10.0.0.2", 00:20:57.319 "adrfam": "ipv4", 00:20:57.319 "trsvcid": "4420", 00:20:57.319 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:57.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.319 "prchk_reftag": false, 00:20:57.319 "prchk_guard": false, 00:20:57.319 "hdgst": false, 00:20:57.319 "ddgst": false, 00:20:57.319 "psk": "key0", 00:20:57.319 "allow_unrecognized_csi": false, 00:20:57.319 "method": "bdev_nvme_attach_controller", 00:20:57.319 "req_id": 1 00:20:57.319 } 00:20:57.319 Got JSON-RPC error response 00:20:57.319 response: 00:20:57.319 { 00:20:57.319 "code": -5, 00:20:57.319 "message": "Input/output error" 00:20:57.319 } 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 950258 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 950258 ']' 00:20:57.319 12:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 950258 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950258 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950258' 00:20:57.319 killing process with pid 950258 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 950258 00:20:57.319 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.319 00:20:57.319 Latency(us) 00:20:57.319 [2024-11-20T11:35:03.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.319 [2024-11-20T11:35:03.083Z] =================================================================================================================== 00:20:57.319 [2024-11-20T11:35:03.083Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.319 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 950258 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:57.579 12:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=950515 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.579 12:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 950515 /var/tmp/bdevperf.sock 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 950515 ']' 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.579 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.579 [2024-11-20 12:35:03.179036] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:20:57.579 [2024-11-20 12:35:03.179082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950515 ] 00:20:57.579 [2024-11-20 12:35:03.247983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.579 [2024-11-20 12:35:03.281673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.838 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.838 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.838 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:57.838 [2024-11-20 12:35:03.535001] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:57.838 [2024-11-20 12:35:03.535034] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:57.838 request: 00:20:57.838 { 00:20:57.838 "name": "key0", 00:20:57.838 "path": "", 00:20:57.838 "method": "keyring_file_add_key", 00:20:57.838 "req_id": 1 00:20:57.838 } 00:20:57.838 Got JSON-RPC error response 00:20:57.838 response: 00:20:57.838 { 00:20:57.838 "code": -1, 00:20:57.838 "message": "Operation not permitted" 00:20:57.838 } 00:20:57.838 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:58.097 [2024-11-20 12:35:03.715551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:58.097 [2024-11-20 12:35:03.715578] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:58.097 request: 00:20:58.097 { 00:20:58.097 "name": "TLSTEST", 00:20:58.097 "trtype": "tcp", 00:20:58.097 "traddr": "10.0.0.2", 00:20:58.097 "adrfam": "ipv4", 00:20:58.097 "trsvcid": "4420", 00:20:58.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.097 "prchk_reftag": false, 00:20:58.097 "prchk_guard": false, 00:20:58.097 "hdgst": false, 00:20:58.097 "ddgst": false, 00:20:58.097 "psk": "key0", 00:20:58.097 "allow_unrecognized_csi": false, 00:20:58.097 "method": "bdev_nvme_attach_controller", 00:20:58.097 "req_id": 1 00:20:58.097 } 00:20:58.097 Got JSON-RPC error response 00:20:58.097 response: 00:20:58.097 { 00:20:58.097 "code": -126, 00:20:58.097 "message": "Required key not available" 00:20:58.097 } 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 950515 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 950515 ']' 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 950515 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950515 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950515' 00:20:58.097 killing process with pid 950515 00:20:58.097 
12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 950515 00:20:58.097 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.097 00:20:58.097 Latency(us) 00:20:58.097 [2024-11-20T11:35:03.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.097 [2024-11-20T11:35:03.861Z] =================================================================================================================== 00:20:58.097 [2024-11-20T11:35:03.861Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.097 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 950515 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 945437 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 945437 ']' 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 945437 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 945437 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 945437' 00:20:58.355 killing process with pid 945437 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 945437 00:20:58.355 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 945437 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.l7YMJyAa8x 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:58.615 12:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.l7YMJyAa8x 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=950794 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 950794 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 950794 ']' 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.615 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.615 [2024-11-20 12:35:04.270527] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:20:58.615 [2024-11-20 12:35:04.270577] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.615 [2024-11-20 12:35:04.344256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.874 [2024-11-20 12:35:04.381486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.874 [2024-11-20 12:35:04.381519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.875 [2024-11-20 12:35:04.381526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.875 [2024-11-20 12:35:04.381531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.875 [2024-11-20 12:35:04.381536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
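The target-side TLS setup traced in the following records condenses to this RPC sequence (commands as they appear verbatim in the trace, with the long `rpc.py` path shortened; a reading aid, not a standalone script):

```shell
# Condensed from the setup_nvmf_tgt trace below; rpc.py path shortened.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k        # -k: TLS listener (experimental per the log)
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```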
00:20:58.875 [2024-11-20 12:35:04.382107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.l7YMJyAa8x 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.l7YMJyAa8x 00:20:59.442 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.702 [2024-11-20 12:35:05.268124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.702 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:59.961 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:59.961 [2024-11-20 12:35:05.617018] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.961 [2024-11-20 12:35:05.617236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:59.961 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.220 malloc0 00:21:00.220 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.220 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:00.480 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l7YMJyAa8x 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.l7YMJyAa8x 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=951088 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 951088 /var/tmp/bdevperf.sock 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 951088 ']' 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.739 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.739 [2024-11-20 12:35:06.367061] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
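The initiator side of the test, traced in the records that follow, pairs a bdevperf instance on a dedicated RPC socket with the same PSK file; condensed (paths shortened, sketch only):

```shell
# Condensed from the run_bdevperf trace: -z makes bdevperf wait for a
# perform_tests RPC; the PSK and TLS controller are set up over its socket.
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
```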
00:21:00.739 [2024-11-20 12:35:06.367105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951088 ] 00:21:00.739 [2024-11-20 12:35:06.440296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.739 [2024-11-20 12:35:06.478938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.679 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.679 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:01.679 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:01.679 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:01.937 [2024-11-20 12:35:07.505355] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.937 TLSTESTn1 00:21:01.937 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.937 Running I/O for 10 seconds... 
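bdevperf then prints per-second samples and a summary. For the 4096-byte IOs used here, the `iops` and `mibps` fields of the results JSON are related by `mibps = iops * io_size / 2**20`; a quick consistency check against the figures that appear in the summary below:

```python
# Sanity check: for 4 KiB IOs, MiB/s is IOPS / 256.
# Values taken from the results JSON in this log.
io_size = 4096
iops = 5800.625137333442          # "iops" field
mibps = iops * io_size / 2**20    # bytes/s converted to MiB/s
print(round(mibps, 6))  # 22.658692, matching the reported 22.658691942708757
```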
00:21:04.253 5791.00 IOPS, 22.62 MiB/s [2024-11-20T11:35:10.952Z] 5541.00 IOPS, 21.64 MiB/s [2024-11-20T11:35:11.889Z] 5692.33 IOPS, 22.24 MiB/s [2024-11-20T11:35:12.826Z] 5817.25 IOPS, 22.72 MiB/s [2024-11-20T11:35:13.762Z] 5798.40 IOPS, 22.65 MiB/s [2024-11-20T11:35:15.140Z] 5683.00 IOPS, 22.20 MiB/s [2024-11-20T11:35:15.708Z] 5740.14 IOPS, 22.42 MiB/s [2024-11-20T11:35:17.085Z] 5786.88 IOPS, 22.60 MiB/s [2024-11-20T11:35:18.021Z] 5781.11 IOPS, 22.58 MiB/s [2024-11-20T11:35:18.021Z] 5797.60 IOPS, 22.65 MiB/s 00:21:12.257 Latency(us) 00:21:12.257 [2024-11-20T11:35:18.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.257 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:12.257 Verification LBA range: start 0x0 length 0x2000 00:21:12.257 TLSTESTn1 : 10.02 5800.63 22.66 0.00 0.00 22032.12 5064.15 51475.55 00:21:12.257 [2024-11-20T11:35:18.021Z] =================================================================================================================== 00:21:12.257 [2024-11-20T11:35:18.021Z] Total : 5800.63 22.66 0.00 0.00 22032.12 5064.15 51475.55 00:21:12.257 { 00:21:12.257 "results": [ 00:21:12.257 { 00:21:12.257 "job": "TLSTESTn1", 00:21:12.257 "core_mask": "0x4", 00:21:12.257 "workload": "verify", 00:21:12.257 "status": "finished", 00:21:12.257 "verify_range": { 00:21:12.257 "start": 0, 00:21:12.257 "length": 8192 00:21:12.257 }, 00:21:12.257 "queue_depth": 128, 00:21:12.257 "io_size": 4096, 00:21:12.257 "runtime": 10.016679, 00:21:12.257 "iops": 5800.625137333442, 00:21:12.257 "mibps": 22.658691942708757, 00:21:12.257 "io_failed": 0, 00:21:12.257 "io_timeout": 0, 00:21:12.257 "avg_latency_us": 22032.121968228836, 00:21:12.257 "min_latency_us": 5064.145454545454, 00:21:12.257 "max_latency_us": 51475.54909090909 00:21:12.257 } 00:21:12.257 ], 00:21:12.257 "core_count": 1 00:21:12.257 } 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 951088 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 951088 ']' 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 951088 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 951088 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 951088' 00:21:12.257 killing process with pid 951088 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 951088 00:21:12.257 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.257 00:21:12.257 Latency(us) 00:21:12.257 [2024-11-20T11:35:18.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.257 [2024-11-20T11:35:18.021Z] =================================================================================================================== 00:21:12.257 [2024-11-20T11:35:18.021Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 951088 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.l7YMJyAa8x 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l7YMJyAa8x 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l7YMJyAa8x 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l7YMJyAa8x 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.l7YMJyAa8x 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=953186 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 953186 /var/tmp/bdevperf.sock 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 953186 ']' 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.257 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.257 [2024-11-20 12:35:18.010341] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:21:12.257 [2024-11-20 12:35:18.010387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953186 ] 00:21:12.516 [2024-11-20 12:35:18.079478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.516 [2024-11-20 12:35:18.118622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.516 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.516 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:12.516 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:12.775 [2024-11-20 12:35:18.356119] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.l7YMJyAa8x': 0100666 00:21:12.775 [2024-11-20 12:35:18.356143] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:12.775 request: 00:21:12.775 { 00:21:12.775 "name": "key0", 00:21:12.775 "path": "/tmp/tmp.l7YMJyAa8x", 00:21:12.775 "method": "keyring_file_add_key", 00:21:12.775 "req_id": 1 00:21:12.775 } 00:21:12.775 Got JSON-RPC error response 00:21:12.775 response: 00:21:12.775 { 00:21:12.775 "code": -1, 00:21:12.775 "message": "Operation not permitted" 00:21:12.775 } 00:21:12.775 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:13.035 [2024-11-20 12:35:18.552699] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.035 [2024-11-20 12:35:18.552728] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:13.035 request: 00:21:13.035 { 00:21:13.035 "name": "TLSTEST", 00:21:13.035 "trtype": "tcp", 00:21:13.035 "traddr": "10.0.0.2", 00:21:13.035 "adrfam": "ipv4", 00:21:13.035 "trsvcid": "4420", 00:21:13.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.035 "prchk_reftag": false, 00:21:13.035 "prchk_guard": false, 00:21:13.035 "hdgst": false, 00:21:13.035 "ddgst": false, 00:21:13.035 "psk": "key0", 00:21:13.035 "allow_unrecognized_csi": false, 00:21:13.035 "method": "bdev_nvme_attach_controller", 00:21:13.035 "req_id": 1 00:21:13.035 } 00:21:13.035 Got JSON-RPC error response 00:21:13.035 response: 00:21:13.035 { 00:21:13.035 "code": -126, 00:21:13.035 "message": "Required key not available" 00:21:13.035 } 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 953186 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 953186 ']' 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 953186 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953186 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 953186' 00:21:13.035 killing process with pid 953186 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 953186 00:21:13.035 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.035 00:21:13.035 Latency(us) 00:21:13.035 [2024-11-20T11:35:18.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.035 [2024-11-20T11:35:18.799Z] =================================================================================================================== 00:21:13.035 [2024-11-20T11:35:18.799Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 953186 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 950794 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 950794 ']' 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 950794 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.035 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950794 00:21:13.294 12:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.294 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.294 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950794' 00:21:13.294 killing process with pid 950794 00:21:13.294 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 950794 00:21:13.294 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 950794 00:21:13.294 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:13.294 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.295 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.295 12:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=953458 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 953458 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 953458 ']' 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:13.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.295 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.295 [2024-11-20 12:35:19.052277] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:13.295 [2024-11-20 12:35:19.052321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.554 [2024-11-20 12:35:19.118536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.554 [2024-11-20 12:35:19.155589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.554 [2024-11-20 12:35:19.155623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.554 [2024-11-20 12:35:19.155630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.554 [2024-11-20 12:35:19.155635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.554 [2024-11-20 12:35:19.155639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
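The `keyring_file_add_key` failures in this log ("Invalid permissions for key file '/tmp/tmp.l7YMJyAa8x': 0100666") come from a mode check on the key file: after the deliberate `chmod 0666`, the key is rejected until the `chmod 0600` later in the test restores owner-only access. A minimal sketch of such a check (an assumption — SPDK's actual check may differ in detail) is to reject any key file with group or other permission bits set:

```python
# Sketch of a key-file permission check consistent with the log's behavior:
# 0600 is accepted, 0666 is rejected. Assumed logic, not SPDK's exact code.
import os
import stat
import tempfile

def key_file_mode_ok(path: str) -> bool:
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0  # only owner bits may be set

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
print(key_file_mode_ok(path))  # True: mirrors the passing chmod 0600 case
os.chmod(path, 0o666)
print(key_file_mode_ok(path))  # False: mirrors the 0100666 keyring error
os.unlink(path)
```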
00:21:13.554 [2024-11-20 12:35:19.156209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.l7YMJyAa8x 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.l7YMJyAa8x 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.l7YMJyAa8x 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.l7YMJyAa8x 00:21:13.554 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:13.813 [2024-11-20 12:35:19.457143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.813 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:14.072 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:14.072 [2024-11-20 12:35:19.802030] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.072 [2024-11-20 12:35:19.802255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.072 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:14.331 malloc0 00:21:14.332 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:14.591 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:14.591 [2024-11-20 12:35:20.335244] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.l7YMJyAa8x': 0100666 00:21:14.591 [2024-11-20 12:35:20.335272] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:14.591 request: 00:21:14.591 { 00:21:14.591 "name": "key0", 00:21:14.591 "path": "/tmp/tmp.l7YMJyAa8x", 00:21:14.591 "method": "keyring_file_add_key", 00:21:14.591 "req_id": 1 
00:21:14.591 } 00:21:14.591 Got JSON-RPC error response 00:21:14.591 response: 00:21:14.591 { 00:21:14.591 "code": -1, 00:21:14.591 "message": "Operation not permitted" 00:21:14.591 } 00:21:14.591 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:14.850 [2024-11-20 12:35:20.511730] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:14.850 [2024-11-20 12:35:20.511764] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:14.850 request: 00:21:14.850 { 00:21:14.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.850 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.850 "psk": "key0", 00:21:14.850 "method": "nvmf_subsystem_add_host", 00:21:14.850 "req_id": 1 00:21:14.850 } 00:21:14.850 Got JSON-RPC error response 00:21:14.850 response: 00:21:14.850 { 00:21:14.850 "code": -32603, 00:21:14.850 "message": "Internal error" 00:21:14.850 } 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 953458 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 953458 ']' 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 953458 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:14.850 12:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953458 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953458' 00:21:14.850 killing process with pid 953458 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 953458 00:21:14.850 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 953458 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.l7YMJyAa8x 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=953755 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 953755 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 953755 ']' 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.109 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.109 [2024-11-20 12:35:20.806993] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:15.109 [2024-11-20 12:35:20.807038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.368 [2024-11-20 12:35:20.873976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.368 [2024-11-20 12:35:20.911310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.368 [2024-11-20 12:35:20.911344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.368 [2024-11-20 12:35:20.911350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.368 [2024-11-20 12:35:20.911356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.368 [2024-11-20 12:35:20.911361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
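Before re-registering the key, the test runs `chmod 0600` on `/tmp/tmp.l7YMJyAa8x` (target/tls.sh@182): the keyring backend refuses key files that are group- or world-readable, which is why the earlier `keyring_file_add_key`/`nvmf_subsystem_add_host --psk key0` attempts failed with "Operation not permitted". A minimal sketch of that preparation step, using a throwaway placeholder file rather than a real interchange-format PSK:

```shell
#!/usr/bin/env bash
# Sketch only: create a stand-in PSK file and lock it down to owner-only
# access, mirroring the chmod 0600 the test performs before calling
# rpc.py keyring_file_add_key. The contents are random placeholder bytes,
# not a valid NVMe/TCP TLS PSK.
key_path=$(mktemp)
head -c 32 /dev/urandom | base64 > "$key_path"
chmod 0600 "$key_path"
# keyring_file_add_key checks the mode; 600 passes, 644 would be rejected.
stat -c '%a' "$key_path"
rm -f "$key_path"
```

With the file at mode 0600, the subsequent `rpc.py keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x` and `rpc.py nvmf_subsystem_add_host ... --psk key0` calls seen below succeed instead of returning the JSON-RPC error.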
00:21:15.368 [2024-11-20 12:35:20.911931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.368 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.368 12:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.368 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:15.368 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.368 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.368 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.368 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.l7YMJyAa8x 00:21:15.368 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.l7YMJyAa8x 00:21:15.369 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.627 [2024-11-20 12:35:21.196216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.627 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:15.627 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:15.886 [2024-11-20 12:35:21.541102] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.886 [2024-11-20 12:35:21.541323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:15.886 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.146 malloc0 00:21:16.146 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.146 12:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:16.405 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=954043 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 954043 /var/tmp/bdevperf.sock 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 954043 ']' 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:21:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.664 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.664 [2024-11-20 12:35:22.265337] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:16.664 [2024-11-20 12:35:22.265381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954043 ] 00:21:16.664 [2024-11-20 12:35:22.338938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.664 [2024-11-20 12:35:22.375962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.923 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.923 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.923 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:16.923 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:17.182 [2024-11-20 12:35:22.809826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.182 TLSTESTn1 00:21:17.182 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:17.441 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:17.441 "subsystems": [ 00:21:17.441 { 00:21:17.441 "subsystem": "keyring", 00:21:17.441 "config": [ 00:21:17.441 { 00:21:17.441 "method": "keyring_file_add_key", 00:21:17.441 "params": { 00:21:17.441 "name": "key0", 00:21:17.441 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:17.441 } 00:21:17.441 } 00:21:17.441 ] 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "subsystem": "iobuf", 00:21:17.441 "config": [ 00:21:17.441 { 00:21:17.441 "method": "iobuf_set_options", 00:21:17.441 "params": { 00:21:17.441 "small_pool_count": 8192, 00:21:17.441 "large_pool_count": 1024, 00:21:17.441 "small_bufsize": 8192, 00:21:17.441 "large_bufsize": 135168, 00:21:17.441 "enable_numa": false 00:21:17.441 } 00:21:17.441 } 00:21:17.441 ] 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "subsystem": "sock", 00:21:17.441 "config": [ 00:21:17.441 { 00:21:17.441 "method": "sock_set_default_impl", 00:21:17.441 "params": { 00:21:17.441 "impl_name": "posix" 00:21:17.441 } 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "method": "sock_impl_set_options", 00:21:17.441 "params": { 00:21:17.441 "impl_name": "ssl", 00:21:17.441 "recv_buf_size": 4096, 00:21:17.441 "send_buf_size": 4096, 00:21:17.441 "enable_recv_pipe": true, 00:21:17.441 "enable_quickack": false, 00:21:17.441 "enable_placement_id": 0, 00:21:17.441 "enable_zerocopy_send_server": true, 00:21:17.441 "enable_zerocopy_send_client": false, 00:21:17.441 "zerocopy_threshold": 0, 00:21:17.441 "tls_version": 0, 00:21:17.441 "enable_ktls": false 00:21:17.441 } 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "method": "sock_impl_set_options", 00:21:17.441 "params": { 00:21:17.441 "impl_name": "posix", 00:21:17.441 "recv_buf_size": 2097152, 00:21:17.441 "send_buf_size": 2097152, 00:21:17.441 "enable_recv_pipe": true, 00:21:17.441 "enable_quickack": false, 00:21:17.441 "enable_placement_id": 0, 
00:21:17.441 "enable_zerocopy_send_server": true, 00:21:17.441 "enable_zerocopy_send_client": false, 00:21:17.441 "zerocopy_threshold": 0, 00:21:17.441 "tls_version": 0, 00:21:17.441 "enable_ktls": false 00:21:17.441 } 00:21:17.441 } 00:21:17.441 ] 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "subsystem": "vmd", 00:21:17.441 "config": [] 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "subsystem": "accel", 00:21:17.441 "config": [ 00:21:17.441 { 00:21:17.441 "method": "accel_set_options", 00:21:17.441 "params": { 00:21:17.441 "small_cache_size": 128, 00:21:17.441 "large_cache_size": 16, 00:21:17.441 "task_count": 2048, 00:21:17.441 "sequence_count": 2048, 00:21:17.441 "buf_count": 2048 00:21:17.441 } 00:21:17.441 } 00:21:17.441 ] 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "subsystem": "bdev", 00:21:17.441 "config": [ 00:21:17.441 { 00:21:17.441 "method": "bdev_set_options", 00:21:17.441 "params": { 00:21:17.441 "bdev_io_pool_size": 65535, 00:21:17.441 "bdev_io_cache_size": 256, 00:21:17.441 "bdev_auto_examine": true, 00:21:17.441 "iobuf_small_cache_size": 128, 00:21:17.441 "iobuf_large_cache_size": 16 00:21:17.441 } 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "method": "bdev_raid_set_options", 00:21:17.441 "params": { 00:21:17.441 "process_window_size_kb": 1024, 00:21:17.441 "process_max_bandwidth_mb_sec": 0 00:21:17.441 } 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "method": "bdev_iscsi_set_options", 00:21:17.441 "params": { 00:21:17.441 "timeout_sec": 30 00:21:17.441 } 00:21:17.441 }, 00:21:17.441 { 00:21:17.441 "method": "bdev_nvme_set_options", 00:21:17.441 "params": { 00:21:17.441 "action_on_timeout": "none", 00:21:17.441 "timeout_us": 0, 00:21:17.441 "timeout_admin_us": 0, 00:21:17.441 "keep_alive_timeout_ms": 10000, 00:21:17.441 "arbitration_burst": 0, 00:21:17.441 "low_priority_weight": 0, 00:21:17.441 "medium_priority_weight": 0, 00:21:17.441 "high_priority_weight": 0, 00:21:17.441 "nvme_adminq_poll_period_us": 10000, 00:21:17.441 "nvme_ioq_poll_period_us": 0, 
00:21:17.441 "io_queue_requests": 0, 00:21:17.441 "delay_cmd_submit": true, 00:21:17.441 "transport_retry_count": 4, 00:21:17.441 "bdev_retry_count": 3, 00:21:17.441 "transport_ack_timeout": 0, 00:21:17.441 "ctrlr_loss_timeout_sec": 0, 00:21:17.441 "reconnect_delay_sec": 0, 00:21:17.441 "fast_io_fail_timeout_sec": 0, 00:21:17.441 "disable_auto_failback": false, 00:21:17.441 "generate_uuids": false, 00:21:17.441 "transport_tos": 0, 00:21:17.441 "nvme_error_stat": false, 00:21:17.441 "rdma_srq_size": 0, 00:21:17.441 "io_path_stat": false, 00:21:17.441 "allow_accel_sequence": false, 00:21:17.441 "rdma_max_cq_size": 0, 00:21:17.441 "rdma_cm_event_timeout_ms": 0, 00:21:17.442 "dhchap_digests": [ 00:21:17.442 "sha256", 00:21:17.442 "sha384", 00:21:17.442 "sha512" 00:21:17.442 ], 00:21:17.442 "dhchap_dhgroups": [ 00:21:17.442 "null", 00:21:17.442 "ffdhe2048", 00:21:17.442 "ffdhe3072", 00:21:17.442 "ffdhe4096", 00:21:17.442 "ffdhe6144", 00:21:17.442 "ffdhe8192" 00:21:17.442 ] 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "bdev_nvme_set_hotplug", 00:21:17.442 "params": { 00:21:17.442 "period_us": 100000, 00:21:17.442 "enable": false 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "bdev_malloc_create", 00:21:17.442 "params": { 00:21:17.442 "name": "malloc0", 00:21:17.442 "num_blocks": 8192, 00:21:17.442 "block_size": 4096, 00:21:17.442 "physical_block_size": 4096, 00:21:17.442 "uuid": "34af1093-bf06-4d53-8f1c-d313c839958b", 00:21:17.442 "optimal_io_boundary": 0, 00:21:17.442 "md_size": 0, 00:21:17.442 "dif_type": 0, 00:21:17.442 "dif_is_head_of_md": false, 00:21:17.442 "dif_pi_format": 0 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "bdev_wait_for_examine" 00:21:17.442 } 00:21:17.442 ] 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "subsystem": "nbd", 00:21:17.442 "config": [] 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "subsystem": "scheduler", 00:21:17.442 "config": [ 00:21:17.442 { 00:21:17.442 "method": 
"framework_set_scheduler", 00:21:17.442 "params": { 00:21:17.442 "name": "static" 00:21:17.442 } 00:21:17.442 } 00:21:17.442 ] 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "subsystem": "nvmf", 00:21:17.442 "config": [ 00:21:17.442 { 00:21:17.442 "method": "nvmf_set_config", 00:21:17.442 "params": { 00:21:17.442 "discovery_filter": "match_any", 00:21:17.442 "admin_cmd_passthru": { 00:21:17.442 "identify_ctrlr": false 00:21:17.442 }, 00:21:17.442 "dhchap_digests": [ 00:21:17.442 "sha256", 00:21:17.442 "sha384", 00:21:17.442 "sha512" 00:21:17.442 ], 00:21:17.442 "dhchap_dhgroups": [ 00:21:17.442 "null", 00:21:17.442 "ffdhe2048", 00:21:17.442 "ffdhe3072", 00:21:17.442 "ffdhe4096", 00:21:17.442 "ffdhe6144", 00:21:17.442 "ffdhe8192" 00:21:17.442 ] 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "nvmf_set_max_subsystems", 00:21:17.442 "params": { 00:21:17.442 "max_subsystems": 1024 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "nvmf_set_crdt", 00:21:17.442 "params": { 00:21:17.442 "crdt1": 0, 00:21:17.442 "crdt2": 0, 00:21:17.442 "crdt3": 0 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "nvmf_create_transport", 00:21:17.442 "params": { 00:21:17.442 "trtype": "TCP", 00:21:17.442 "max_queue_depth": 128, 00:21:17.442 "max_io_qpairs_per_ctrlr": 127, 00:21:17.442 "in_capsule_data_size": 4096, 00:21:17.442 "max_io_size": 131072, 00:21:17.442 "io_unit_size": 131072, 00:21:17.442 "max_aq_depth": 128, 00:21:17.442 "num_shared_buffers": 511, 00:21:17.442 "buf_cache_size": 4294967295, 00:21:17.442 "dif_insert_or_strip": false, 00:21:17.442 "zcopy": false, 00:21:17.442 "c2h_success": false, 00:21:17.442 "sock_priority": 0, 00:21:17.442 "abort_timeout_sec": 1, 00:21:17.442 "ack_timeout": 0, 00:21:17.442 "data_wr_pool_size": 0 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "nvmf_create_subsystem", 00:21:17.442 "params": { 00:21:17.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.442 
"allow_any_host": false, 00:21:17.442 "serial_number": "SPDK00000000000001", 00:21:17.442 "model_number": "SPDK bdev Controller", 00:21:17.442 "max_namespaces": 10, 00:21:17.442 "min_cntlid": 1, 00:21:17.442 "max_cntlid": 65519, 00:21:17.442 "ana_reporting": false 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "nvmf_subsystem_add_host", 00:21:17.442 "params": { 00:21:17.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.442 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.442 "psk": "key0" 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "nvmf_subsystem_add_ns", 00:21:17.442 "params": { 00:21:17.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.442 "namespace": { 00:21:17.442 "nsid": 1, 00:21:17.442 "bdev_name": "malloc0", 00:21:17.442 "nguid": "34AF1093BF064D538F1CD313C839958B", 00:21:17.442 "uuid": "34af1093-bf06-4d53-8f1c-d313c839958b", 00:21:17.442 "no_auto_visible": false 00:21:17.442 } 00:21:17.442 } 00:21:17.442 }, 00:21:17.442 { 00:21:17.442 "method": "nvmf_subsystem_add_listener", 00:21:17.442 "params": { 00:21:17.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.442 "listen_address": { 00:21:17.442 "trtype": "TCP", 00:21:17.442 "adrfam": "IPv4", 00:21:17.442 "traddr": "10.0.0.2", 00:21:17.442 "trsvcid": "4420" 00:21:17.442 }, 00:21:17.442 "secure_channel": true 00:21:17.442 } 00:21:17.442 } 00:21:17.442 ] 00:21:17.442 } 00:21:17.442 ] 00:21:17.442 }' 00:21:17.442 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:17.701 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:17.701 "subsystems": [ 00:21:17.701 { 00:21:17.701 "subsystem": "keyring", 00:21:17.701 "config": [ 00:21:17.701 { 00:21:17.701 "method": "keyring_file_add_key", 00:21:17.701 "params": { 00:21:17.701 "name": "key0", 00:21:17.701 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:17.701 } 
00:21:17.701 } 00:21:17.701 ] 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "subsystem": "iobuf", 00:21:17.701 "config": [ 00:21:17.701 { 00:21:17.701 "method": "iobuf_set_options", 00:21:17.701 "params": { 00:21:17.701 "small_pool_count": 8192, 00:21:17.701 "large_pool_count": 1024, 00:21:17.701 "small_bufsize": 8192, 00:21:17.701 "large_bufsize": 135168, 00:21:17.701 "enable_numa": false 00:21:17.701 } 00:21:17.701 } 00:21:17.701 ] 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "subsystem": "sock", 00:21:17.701 "config": [ 00:21:17.701 { 00:21:17.702 "method": "sock_set_default_impl", 00:21:17.702 "params": { 00:21:17.702 "impl_name": "posix" 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "sock_impl_set_options", 00:21:17.702 "params": { 00:21:17.702 "impl_name": "ssl", 00:21:17.702 "recv_buf_size": 4096, 00:21:17.702 "send_buf_size": 4096, 00:21:17.702 "enable_recv_pipe": true, 00:21:17.702 "enable_quickack": false, 00:21:17.702 "enable_placement_id": 0, 00:21:17.702 "enable_zerocopy_send_server": true, 00:21:17.702 "enable_zerocopy_send_client": false, 00:21:17.702 "zerocopy_threshold": 0, 00:21:17.702 "tls_version": 0, 00:21:17.702 "enable_ktls": false 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "sock_impl_set_options", 00:21:17.702 "params": { 00:21:17.702 "impl_name": "posix", 00:21:17.702 "recv_buf_size": 2097152, 00:21:17.702 "send_buf_size": 2097152, 00:21:17.702 "enable_recv_pipe": true, 00:21:17.702 "enable_quickack": false, 00:21:17.702 "enable_placement_id": 0, 00:21:17.702 "enable_zerocopy_send_server": true, 00:21:17.702 "enable_zerocopy_send_client": false, 00:21:17.702 "zerocopy_threshold": 0, 00:21:17.702 "tls_version": 0, 00:21:17.702 "enable_ktls": false 00:21:17.702 } 00:21:17.702 } 00:21:17.702 ] 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "subsystem": "vmd", 00:21:17.702 "config": [] 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "subsystem": "accel", 00:21:17.702 "config": [ 00:21:17.702 { 00:21:17.702 
"method": "accel_set_options", 00:21:17.702 "params": { 00:21:17.702 "small_cache_size": 128, 00:21:17.702 "large_cache_size": 16, 00:21:17.702 "task_count": 2048, 00:21:17.702 "sequence_count": 2048, 00:21:17.702 "buf_count": 2048 00:21:17.702 } 00:21:17.702 } 00:21:17.702 ] 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "subsystem": "bdev", 00:21:17.702 "config": [ 00:21:17.702 { 00:21:17.702 "method": "bdev_set_options", 00:21:17.702 "params": { 00:21:17.702 "bdev_io_pool_size": 65535, 00:21:17.702 "bdev_io_cache_size": 256, 00:21:17.702 "bdev_auto_examine": true, 00:21:17.702 "iobuf_small_cache_size": 128, 00:21:17.702 "iobuf_large_cache_size": 16 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "bdev_raid_set_options", 00:21:17.702 "params": { 00:21:17.702 "process_window_size_kb": 1024, 00:21:17.702 "process_max_bandwidth_mb_sec": 0 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "bdev_iscsi_set_options", 00:21:17.702 "params": { 00:21:17.702 "timeout_sec": 30 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "bdev_nvme_set_options", 00:21:17.702 "params": { 00:21:17.702 "action_on_timeout": "none", 00:21:17.702 "timeout_us": 0, 00:21:17.702 "timeout_admin_us": 0, 00:21:17.702 "keep_alive_timeout_ms": 10000, 00:21:17.702 "arbitration_burst": 0, 00:21:17.702 "low_priority_weight": 0, 00:21:17.702 "medium_priority_weight": 0, 00:21:17.702 "high_priority_weight": 0, 00:21:17.702 "nvme_adminq_poll_period_us": 10000, 00:21:17.702 "nvme_ioq_poll_period_us": 0, 00:21:17.702 "io_queue_requests": 512, 00:21:17.702 "delay_cmd_submit": true, 00:21:17.702 "transport_retry_count": 4, 00:21:17.702 "bdev_retry_count": 3, 00:21:17.702 "transport_ack_timeout": 0, 00:21:17.702 "ctrlr_loss_timeout_sec": 0, 00:21:17.702 "reconnect_delay_sec": 0, 00:21:17.702 "fast_io_fail_timeout_sec": 0, 00:21:17.702 "disable_auto_failback": false, 00:21:17.702 "generate_uuids": false, 00:21:17.702 "transport_tos": 0, 00:21:17.702 
"nvme_error_stat": false, 00:21:17.702 "rdma_srq_size": 0, 00:21:17.702 "io_path_stat": false, 00:21:17.702 "allow_accel_sequence": false, 00:21:17.702 "rdma_max_cq_size": 0, 00:21:17.702 "rdma_cm_event_timeout_ms": 0, 00:21:17.702 "dhchap_digests": [ 00:21:17.702 "sha256", 00:21:17.702 "sha384", 00:21:17.702 "sha512" 00:21:17.702 ], 00:21:17.702 "dhchap_dhgroups": [ 00:21:17.702 "null", 00:21:17.702 "ffdhe2048", 00:21:17.702 "ffdhe3072", 00:21:17.702 "ffdhe4096", 00:21:17.702 "ffdhe6144", 00:21:17.702 "ffdhe8192" 00:21:17.702 ] 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "bdev_nvme_attach_controller", 00:21:17.702 "params": { 00:21:17.702 "name": "TLSTEST", 00:21:17.702 "trtype": "TCP", 00:21:17.702 "adrfam": "IPv4", 00:21:17.702 "traddr": "10.0.0.2", 00:21:17.702 "trsvcid": "4420", 00:21:17.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.702 "prchk_reftag": false, 00:21:17.702 "prchk_guard": false, 00:21:17.702 "ctrlr_loss_timeout_sec": 0, 00:21:17.702 "reconnect_delay_sec": 0, 00:21:17.702 "fast_io_fail_timeout_sec": 0, 00:21:17.702 "psk": "key0", 00:21:17.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.702 "hdgst": false, 00:21:17.702 "ddgst": false, 00:21:17.702 "multipath": "multipath" 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "bdev_nvme_set_hotplug", 00:21:17.702 "params": { 00:21:17.702 "period_us": 100000, 00:21:17.702 "enable": false 00:21:17.702 } 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "method": "bdev_wait_for_examine" 00:21:17.702 } 00:21:17.702 ] 00:21:17.702 }, 00:21:17.702 { 00:21:17.702 "subsystem": "nbd", 00:21:17.702 "config": [] 00:21:17.702 } 00:21:17.702 ] 00:21:17.702 }' 00:21:17.702 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 954043 00:21:17.702 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 954043 ']' 00:21:17.702 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 954043 00:21:17.702 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.702 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.702 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 954043 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 954043' 00:21:17.966 killing process with pid 954043 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 954043 00:21:17.966 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.966 00:21:17.966 Latency(us) 00:21:17.966 [2024-11-20T11:35:23.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.966 [2024-11-20T11:35:23.730Z] =================================================================================================================== 00:21:17.966 [2024-11-20T11:35:23.730Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 954043 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 953755 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 953755 ']' 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 953755 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953755 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953755' 00:21:17.966 killing process with pid 953755 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 953755 00:21:17.966 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 953755 00:21:18.284 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:18.284 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.284 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.284 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:18.284 "subsystems": [ 00:21:18.284 { 00:21:18.284 "subsystem": "keyring", 00:21:18.284 "config": [ 00:21:18.284 { 00:21:18.284 "method": "keyring_file_add_key", 00:21:18.284 "params": { 00:21:18.284 "name": "key0", 00:21:18.284 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:18.284 } 00:21:18.284 } 00:21:18.284 ] 00:21:18.284 }, 00:21:18.284 { 00:21:18.284 "subsystem": "iobuf", 00:21:18.284 "config": [ 00:21:18.284 { 00:21:18.284 "method": "iobuf_set_options", 00:21:18.284 "params": { 00:21:18.284 "small_pool_count": 8192, 00:21:18.284 "large_pool_count": 1024, 00:21:18.284 "small_bufsize": 8192, 00:21:18.284 "large_bufsize": 135168, 00:21:18.284 "enable_numa": false 00:21:18.284 } 00:21:18.284 } 00:21:18.284 ] 00:21:18.284 }, 00:21:18.284 
{ 00:21:18.284 "subsystem": "sock", 00:21:18.284 "config": [ 00:21:18.284 { 00:21:18.284 "method": "sock_set_default_impl", 00:21:18.284 "params": { 00:21:18.284 "impl_name": "posix" 00:21:18.284 } 00:21:18.284 }, 00:21:18.284 { 00:21:18.284 "method": "sock_impl_set_options", 00:21:18.284 "params": { 00:21:18.284 "impl_name": "ssl", 00:21:18.284 "recv_buf_size": 4096, 00:21:18.284 "send_buf_size": 4096, 00:21:18.284 "enable_recv_pipe": true, 00:21:18.284 "enable_quickack": false, 00:21:18.284 "enable_placement_id": 0, 00:21:18.284 "enable_zerocopy_send_server": true, 00:21:18.284 "enable_zerocopy_send_client": false, 00:21:18.284 "zerocopy_threshold": 0, 00:21:18.284 "tls_version": 0, 00:21:18.284 "enable_ktls": false 00:21:18.284 } 00:21:18.284 }, 00:21:18.284 { 00:21:18.284 "method": "sock_impl_set_options", 00:21:18.284 "params": { 00:21:18.284 "impl_name": "posix", 00:21:18.284 "recv_buf_size": 2097152, 00:21:18.284 "send_buf_size": 2097152, 00:21:18.284 "enable_recv_pipe": true, 00:21:18.284 "enable_quickack": false, 00:21:18.284 "enable_placement_id": 0, 00:21:18.284 "enable_zerocopy_send_server": true, 00:21:18.284 "enable_zerocopy_send_client": false, 00:21:18.284 "zerocopy_threshold": 0, 00:21:18.284 "tls_version": 0, 00:21:18.284 "enable_ktls": false 00:21:18.284 } 00:21:18.284 } 00:21:18.284 ] 00:21:18.284 }, 00:21:18.284 { 00:21:18.284 "subsystem": "vmd", 00:21:18.284 "config": [] 00:21:18.284 }, 00:21:18.284 { 00:21:18.284 "subsystem": "accel", 00:21:18.284 "config": [ 00:21:18.284 { 00:21:18.284 "method": "accel_set_options", 00:21:18.284 "params": { 00:21:18.284 "small_cache_size": 128, 00:21:18.284 "large_cache_size": 16, 00:21:18.284 "task_count": 2048, 00:21:18.284 "sequence_count": 2048, 00:21:18.284 "buf_count": 2048 00:21:18.284 } 00:21:18.284 } 00:21:18.284 ] 00:21:18.284 }, 00:21:18.284 { 00:21:18.284 "subsystem": "bdev", 00:21:18.284 "config": [ 00:21:18.284 { 00:21:18.284 "method": "bdev_set_options", 00:21:18.284 "params": { 00:21:18.284 
"bdev_io_pool_size": 65535, 00:21:18.284 "bdev_io_cache_size": 256, 00:21:18.284 "bdev_auto_examine": true, 00:21:18.284 "iobuf_small_cache_size": 128, 00:21:18.285 "iobuf_large_cache_size": 16 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "bdev_raid_set_options", 00:21:18.285 "params": { 00:21:18.285 "process_window_size_kb": 1024, 00:21:18.285 "process_max_bandwidth_mb_sec": 0 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "bdev_iscsi_set_options", 00:21:18.285 "params": { 00:21:18.285 "timeout_sec": 30 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "bdev_nvme_set_options", 00:21:18.285 "params": { 00:21:18.285 "action_on_timeout": "none", 00:21:18.285 "timeout_us": 0, 00:21:18.285 "timeout_admin_us": 0, 00:21:18.285 "keep_alive_timeout_ms": 10000, 00:21:18.285 "arbitration_burst": 0, 00:21:18.285 "low_priority_weight": 0, 00:21:18.285 "medium_priority_weight": 0, 00:21:18.285 "high_priority_weight": 0, 00:21:18.285 "nvme_adminq_poll_period_us": 10000, 00:21:18.285 "nvme_ioq_poll_period_us": 0, 00:21:18.285 "io_queue_requests": 0, 00:21:18.285 "delay_cmd_submit": true, 00:21:18.285 "transport_retry_count": 4, 00:21:18.285 "bdev_retry_count": 3, 00:21:18.285 "transport_ack_timeout": 0, 00:21:18.285 "ctrlr_loss_timeout_sec": 0, 00:21:18.285 "reconnect_delay_sec": 0, 00:21:18.285 "fast_io_fail_timeout_sec": 0, 00:21:18.285 "disable_auto_failback": false, 00:21:18.285 "generate_uuids": false, 00:21:18.285 "transport_tos": 0, 00:21:18.285 "nvme_error_stat": false, 00:21:18.285 "rdma_srq_size": 0, 00:21:18.285 "io_path_stat": false, 00:21:18.285 "allow_accel_sequence": false, 00:21:18.285 "rdma_max_cq_size": 0, 00:21:18.285 "rdma_cm_event_timeout_ms": 0, 00:21:18.285 "dhchap_digests": [ 00:21:18.285 "sha256", 00:21:18.285 "sha384", 00:21:18.285 "sha512" 00:21:18.285 ], 00:21:18.285 "dhchap_dhgroups": [ 00:21:18.285 "null", 00:21:18.285 "ffdhe2048", 00:21:18.285 "ffdhe3072", 00:21:18.285 "ffdhe4096", 
00:21:18.285 "ffdhe6144", 00:21:18.285 "ffdhe8192" 00:21:18.285 ] 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "bdev_nvme_set_hotplug", 00:21:18.285 "params": { 00:21:18.285 "period_us": 100000, 00:21:18.285 "enable": false 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "bdev_malloc_create", 00:21:18.285 "params": { 00:21:18.285 "name": "malloc0", 00:21:18.285 "num_blocks": 8192, 00:21:18.285 "block_size": 4096, 00:21:18.285 "physical_block_size": 4096, 00:21:18.285 "uuid": "34af1093-bf06-4d53-8f1c-d313c839958b", 00:21:18.285 "optimal_io_boundary": 0, 00:21:18.285 "md_size": 0, 00:21:18.285 "dif_type": 0, 00:21:18.285 "dif_is_head_of_md": false, 00:21:18.285 "dif_pi_format": 0 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "bdev_wait_for_examine" 00:21:18.285 } 00:21:18.285 ] 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "subsystem": "nbd", 00:21:18.285 "config": [] 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "subsystem": "scheduler", 00:21:18.285 "config": [ 00:21:18.285 { 00:21:18.285 "method": "framework_set_scheduler", 00:21:18.285 "params": { 00:21:18.285 "name": "static" 00:21:18.285 } 00:21:18.285 } 00:21:18.285 ] 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "subsystem": "nvmf", 00:21:18.285 "config": [ 00:21:18.285 { 00:21:18.285 "method": "nvmf_set_config", 00:21:18.285 "params": { 00:21:18.285 "discovery_filter": "match_any", 00:21:18.285 "admin_cmd_passthru": { 00:21:18.285 "identify_ctrlr": false 00:21:18.285 }, 00:21:18.285 "dhchap_digests": [ 00:21:18.285 "sha256", 00:21:18.285 "sha384", 00:21:18.285 "sha512" 00:21:18.285 ], 00:21:18.285 "dhchap_dhgroups": [ 00:21:18.285 "null", 00:21:18.285 "ffdhe2048", 00:21:18.285 "ffdhe3072", 00:21:18.285 "ffdhe4096", 00:21:18.285 "ffdhe6144", 00:21:18.285 "ffdhe8192" 00:21:18.285 ] 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "nvmf_set_max_subsystems", 00:21:18.285 "params": { 00:21:18.285 "max_subsystems": 1024 00:21:18.285 
} 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "nvmf_set_crdt", 00:21:18.285 "params": { 00:21:18.285 "crdt1": 0, 00:21:18.285 "crdt2": 0, 00:21:18.285 "crdt3": 0 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "nvmf_create_transport", 00:21:18.285 "params": { 00:21:18.285 "trtype": "TCP", 00:21:18.285 "max_queue_depth": 128, 00:21:18.285 "max_io_qpairs_per_ctrlr": 127, 00:21:18.285 "in_capsule_data_size": 4096, 00:21:18.285 "max_io_size": 131072, 00:21:18.285 "io_unit_size": 131072, 00:21:18.285 "max_aq_depth": 128, 00:21:18.285 "num_shared_buffers": 511, 00:21:18.285 "buf_cache_size": 4294967295, 00:21:18.285 "dif_insert_or_strip": false, 00:21:18.285 "zcopy": false, 00:21:18.285 "c2h_success": false, 00:21:18.285 "sock_priority": 0, 00:21:18.285 "abort_timeout_sec": 1, 00:21:18.285 "ack_timeout": 0, 00:21:18.285 "data_wr_pool_size": 0 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "nvmf_create_subsystem", 00:21:18.285 "params": { 00:21:18.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.285 "allow_any_host": false, 00:21:18.285 "serial_number": "SPDK00000000000001", 00:21:18.285 "model_number": "SPDK bdev Controller", 00:21:18.285 "max_namespaces": 10, 00:21:18.285 "min_cntlid": 1, 00:21:18.285 "max_cntlid": 65519, 00:21:18.285 "ana_reporting": false 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "nvmf_subsystem_add_host", 00:21:18.285 "params": { 00:21:18.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.285 "host": "nqn.2016-06.io.spdk:host1", 00:21:18.285 "psk": "key0" 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "nvmf_subsystem_add_ns", 00:21:18.285 "params": { 00:21:18.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.285 "namespace": { 00:21:18.285 "nsid": 1, 00:21:18.285 "bdev_name": "malloc0", 00:21:18.285 "nguid": "34AF1093BF064D538F1CD313C839958B", 00:21:18.285 "uuid": "34af1093-bf06-4d53-8f1c-d313c839958b", 00:21:18.285 "no_auto_visible": false 
00:21:18.285 } 00:21:18.285 } 00:21:18.285 }, 00:21:18.285 { 00:21:18.285 "method": "nvmf_subsystem_add_listener", 00:21:18.285 "params": { 00:21:18.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.285 "listen_address": { 00:21:18.285 "trtype": "TCP", 00:21:18.285 "adrfam": "IPv4", 00:21:18.285 "traddr": "10.0.0.2", 00:21:18.285 "trsvcid": "4420" 00:21:18.285 }, 00:21:18.285 "secure_channel": true 00:21:18.285 } 00:21:18.285 } 00:21:18.285 ] 00:21:18.285 } 00:21:18.285 ] 00:21:18.285 }' 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=954326 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 954326 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 954326 ']' 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.285 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.285 [2024-11-20 12:35:23.899367] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:18.286 [2024-11-20 12:35:23.899410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.286 [2024-11-20 12:35:23.970967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.286 [2024-11-20 12:35:24.008747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.286 [2024-11-20 12:35:24.008779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.286 [2024-11-20 12:35:24.008786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.286 [2024-11-20 12:35:24.008791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.286 [2024-11-20 12:35:24.008796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.286 [2024-11-20 12:35:24.009378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.566 [2024-11-20 12:35:24.219332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.566 [2024-11-20 12:35:24.251368] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.566 [2024-11-20 12:35:24.251588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=954560 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 954560 /var/tmp/bdevperf.sock 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 954560 ']' 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.141 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:19.141 "subsystems": [ 00:21:19.141 { 00:21:19.141 "subsystem": "keyring", 00:21:19.141 "config": [ 00:21:19.141 { 00:21:19.141 "method": "keyring_file_add_key", 00:21:19.141 "params": { 00:21:19.141 "name": "key0", 00:21:19.141 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:19.141 } 00:21:19.141 } 00:21:19.141 ] 00:21:19.141 }, 00:21:19.141 { 00:21:19.141 "subsystem": "iobuf", 00:21:19.141 "config": [ 00:21:19.141 { 00:21:19.141 "method": "iobuf_set_options", 00:21:19.141 "params": { 00:21:19.141 "small_pool_count": 8192, 00:21:19.141 "large_pool_count": 1024, 00:21:19.141 "small_bufsize": 8192, 00:21:19.141 "large_bufsize": 135168, 00:21:19.141 "enable_numa": false 00:21:19.141 } 00:21:19.141 } 00:21:19.141 ] 00:21:19.141 }, 00:21:19.141 { 00:21:19.141 "subsystem": "sock", 00:21:19.141 "config": [ 00:21:19.141 { 00:21:19.141 "method": "sock_set_default_impl", 00:21:19.141 "params": { 00:21:19.141 "impl_name": "posix" 00:21:19.141 } 00:21:19.141 }, 00:21:19.141 { 00:21:19.141 "method": "sock_impl_set_options", 00:21:19.141 "params": { 00:21:19.141 "impl_name": "ssl", 00:21:19.141 "recv_buf_size": 4096, 00:21:19.141 "send_buf_size": 4096, 00:21:19.141 "enable_recv_pipe": true, 00:21:19.141 "enable_quickack": false, 00:21:19.141 "enable_placement_id": 0, 00:21:19.142 "enable_zerocopy_send_server": true, 00:21:19.142 "enable_zerocopy_send_client": false, 00:21:19.142 "zerocopy_threshold": 0, 00:21:19.142 "tls_version": 0, 00:21:19.142 "enable_ktls": false 00:21:19.142 } 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "method": "sock_impl_set_options", 00:21:19.142 "params": { 
00:21:19.142 "impl_name": "posix", 00:21:19.142 "recv_buf_size": 2097152, 00:21:19.142 "send_buf_size": 2097152, 00:21:19.142 "enable_recv_pipe": true, 00:21:19.142 "enable_quickack": false, 00:21:19.142 "enable_placement_id": 0, 00:21:19.142 "enable_zerocopy_send_server": true, 00:21:19.142 "enable_zerocopy_send_client": false, 00:21:19.142 "zerocopy_threshold": 0, 00:21:19.142 "tls_version": 0, 00:21:19.142 "enable_ktls": false 00:21:19.142 } 00:21:19.142 } 00:21:19.142 ] 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "subsystem": "vmd", 00:21:19.142 "config": [] 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "subsystem": "accel", 00:21:19.142 "config": [ 00:21:19.142 { 00:21:19.142 "method": "accel_set_options", 00:21:19.142 "params": { 00:21:19.142 "small_cache_size": 128, 00:21:19.142 "large_cache_size": 16, 00:21:19.142 "task_count": 2048, 00:21:19.142 "sequence_count": 2048, 00:21:19.142 "buf_count": 2048 00:21:19.142 } 00:21:19.142 } 00:21:19.142 ] 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "subsystem": "bdev", 00:21:19.142 "config": [ 00:21:19.142 { 00:21:19.142 "method": "bdev_set_options", 00:21:19.142 "params": { 00:21:19.142 "bdev_io_pool_size": 65535, 00:21:19.142 "bdev_io_cache_size": 256, 00:21:19.142 "bdev_auto_examine": true, 00:21:19.142 "iobuf_small_cache_size": 128, 00:21:19.142 "iobuf_large_cache_size": 16 00:21:19.142 } 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "method": "bdev_raid_set_options", 00:21:19.142 "params": { 00:21:19.142 "process_window_size_kb": 1024, 00:21:19.142 "process_max_bandwidth_mb_sec": 0 00:21:19.142 } 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "method": "bdev_iscsi_set_options", 00:21:19.142 "params": { 00:21:19.142 "timeout_sec": 30 00:21:19.142 } 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "method": "bdev_nvme_set_options", 00:21:19.142 "params": { 00:21:19.142 "action_on_timeout": "none", 00:21:19.142 "timeout_us": 0, 00:21:19.142 "timeout_admin_us": 0, 00:21:19.142 "keep_alive_timeout_ms": 10000, 00:21:19.142 
"arbitration_burst": 0, 00:21:19.142 "low_priority_weight": 0, 00:21:19.142 "medium_priority_weight": 0, 00:21:19.142 "high_priority_weight": 0, 00:21:19.142 "nvme_adminq_poll_period_us": 10000, 00:21:19.142 "nvme_ioq_poll_period_us": 0, 00:21:19.142 "io_queue_requests": 512, 00:21:19.142 "delay_cmd_submit": true, 00:21:19.142 "transport_retry_count": 4, 00:21:19.142 "bdev_retry_count": 3, 00:21:19.142 "transport_ack_timeout": 0, 00:21:19.142 "ctrlr_loss_timeout_sec": 0, 00:21:19.142 "reconnect_delay_sec": 0, 00:21:19.142 "fast_io_fail_timeout_sec": 0, 00:21:19.142 "disable_auto_failback": false, 00:21:19.142 "generate_uuids": false, 00:21:19.142 "transport_tos": 0, 00:21:19.142 "nvme_error_stat": false, 00:21:19.142 "rdma_srq_size": 0, 00:21:19.142 "io_path_stat": false, 00:21:19.142 "allow_accel_sequence": false, 00:21:19.142 "rdma_max_cq_size": 0, 00:21:19.142 "rdma_cm_event_timeout_ms": 0, 00:21:19.142 "dhchap_digests": [ 00:21:19.142 "sha256", 00:21:19.142 "sha384", 00:21:19.142 "sha512" 00:21:19.142 ], 00:21:19.142 "dhchap_dhgroups": [ 00:21:19.142 "null", 00:21:19.142 "ffdhe2048", 00:21:19.142 "ffdhe3072", 00:21:19.142 "ffdhe4096", 00:21:19.142 "ffdhe6144", 00:21:19.142 "ffdhe8192" 00:21:19.142 ] 00:21:19.142 } 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "method": "bdev_nvme_attach_controller", 00:21:19.142 "params": { 00:21:19.142 "name": "TLSTEST", 00:21:19.142 "trtype": "TCP", 00:21:19.142 "adrfam": "IPv4", 00:21:19.142 "traddr": "10.0.0.2", 00:21:19.142 "trsvcid": "4420", 00:21:19.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.142 "prchk_reftag": false, 00:21:19.142 "prchk_guard": false, 00:21:19.142 "ctrlr_loss_timeout_sec": 0, 00:21:19.142 "reconnect_delay_sec": 0, 00:21:19.142 "fast_io_fail_timeout_sec": 0, 00:21:19.142 "psk": "key0", 00:21:19.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.142 "hdgst": false, 00:21:19.142
"ddgst": false, 00:21:19.142 "multipath": "multipath" 00:21:19.142 } 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "method": "bdev_nvme_set_hotplug", 00:21:19.142 "params": { 00:21:19.142 "period_us": 100000, 00:21:19.142 "enable": false 00:21:19.142 } 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "method": "bdev_wait_for_examine" 00:21:19.142 } 00:21:19.142 ] 00:21:19.142 }, 00:21:19.142 { 00:21:19.142 "subsystem": "nbd", 00:21:19.142 "config": [] 00:21:19.142 } 00:21:19.142 ] 00:21:19.142 }' 00:21:19.142 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.142 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.142 [2024-11-20 12:35:24.790354] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:19.142 [2024-11-20 12:35:24.790399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954560 ] 00:21:19.142 [2024-11-20 12:35:24.859487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.142 [2024-11-20 12:35:24.899103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.402 [2024-11-20 12:35:25.048552] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.970 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.970 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.970 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:19.970 Running I/O for 10 seconds... 
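The heredoc that line-wrapped above is the complete client configuration handed to bdevperf on fd 63. A trimmed sketch keeping only the TLS-relevant pieces, the keyring entry and the attach parameters that reference it (the key path `/tmp/tls.psk` is a placeholder; this run's actual key file is `/tmp/tmp.l7YMJyAa8x`):

```shell
# Minimal TLS-client config fragment in the shape bdevperf consumed above.
# Only the keyring key and the bdev_nvme_attach_controller params that
# reference it are kept; all other options from the log are omitted.
config='{
  "subsystems": [
    { "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tls.psk" } }
      ] },
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "TLSTEST", "trtype": "TCP",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "psk": "key0" } }
      ] }
  ]
}'
# Validate the fragment before it would be fed to bdevperf via
# process substitution (-c /dev/fd/63).
echo "$config" | python3 -m json.tool > /dev/null && echo "valid"
```

Note that `"psk": "key0"` names a key registered in the keyring subsystem; the key material itself stays in the key file and never appears on the command line.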
00:21:22.286 6112.00 IOPS, 23.88 MiB/s [2024-11-20T11:35:28.988Z] 6148.00 IOPS, 24.02 MiB/s [2024-11-20T11:35:29.926Z] 6170.33 IOPS, 24.10 MiB/s [2024-11-20T11:35:30.863Z] 6160.50 IOPS, 24.06 MiB/s [2024-11-20T11:35:31.802Z] 6185.00 IOPS, 24.16 MiB/s [2024-11-20T11:35:32.739Z] 6208.33 IOPS, 24.25 MiB/s [2024-11-20T11:35:34.119Z] 6207.57 IOPS, 24.25 MiB/s [2024-11-20T11:35:35.055Z] 6218.25 IOPS, 24.29 MiB/s [2024-11-20T11:35:35.994Z] 6213.67 IOPS, 24.27 MiB/s [2024-11-20T11:35:35.994Z] 6229.70 IOPS, 24.33 MiB/s 00:21:30.230 Latency(us) 00:21:30.230 [2024-11-20T11:35:35.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.230 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:30.230 Verification LBA range: start 0x0 length 0x2000 00:21:30.230 TLSTESTn1 : 10.02 6232.55 24.35 0.00 0.00 20505.59 4349.21 19065.02 00:21:30.230 [2024-11-20T11:35:35.994Z] =================================================================================================================== 00:21:30.230 [2024-11-20T11:35:35.994Z] Total : 6232.55 24.35 0.00 0.00 20505.59 4349.21 19065.02 00:21:30.230 { 00:21:30.230 "results": [ 00:21:30.230 { 00:21:30.230 "job": "TLSTESTn1", 00:21:30.230 "core_mask": "0x4", 00:21:30.230 "workload": "verify", 00:21:30.230 "status": "finished", 00:21:30.230 "verify_range": { 00:21:30.230 "start": 0, 00:21:30.230 "length": 8192 00:21:30.230 }, 00:21:30.230 "queue_depth": 128, 00:21:30.230 "io_size": 4096, 00:21:30.230 "runtime": 10.015651, 00:21:30.230 "iops": 6232.545443126962, 00:21:30.230 "mibps": 24.345880637214695, 00:21:30.230 "io_failed": 0, 00:21:30.230 "io_timeout": 0, 00:21:30.230 "avg_latency_us": 20505.586272658824, 00:21:30.230 "min_latency_us": 4349.2072727272725, 00:21:30.230 "max_latency_us": 19065.01818181818 00:21:30.230 } 00:21:30.230 ], 00:21:30.230 "core_count": 1 00:21:30.230 } 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 954560 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 954560 ']' 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 954560 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 954560 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 954560' 00:21:30.230 killing process with pid 954560 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 954560 00:21:30.230 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.230 00:21:30.230 Latency(us) 00:21:30.230 [2024-11-20T11:35:35.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.230 [2024-11-20T11:35:35.994Z] =================================================================================================================== 00:21:30.230 [2024-11-20T11:35:35.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 954560 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 954326 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 954326 ']' 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 954326 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.230 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 954326 00:21:30.489 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 954326' 00:21:30.489 killing process with pid 954326 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 954326 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 954326 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=956465 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 956465 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:30.489 12:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 956465 ']' 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.489 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.489 [2024-11-20 12:35:36.215163] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:30.489 [2024-11-20 12:35:36.215205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.749 [2024-11-20 12:35:36.273536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.749 [2024-11-20 12:35:36.311752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.749 [2024-11-20 12:35:36.311785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.749 [2024-11-20 12:35:36.311791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.749 [2024-11-20 12:35:36.311796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
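The 10-second verify run above settled at 6232.55 IOPS and 24.35 MiB/s. With a fixed 4096-byte IO size those two figures are related by MiB/s = IOPS × 4096 / 2^20, which can be cross-checked directly from the run's own results JSON:

```shell
# Cross-check the bdevperf summary: convert the reported IOPS for
# 4096-byte IOs into MiB/s (1 MiB = 1048576 bytes).
iops=6232.545443126962   # "iops" from the results JSON above
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 4096 / 1048576 }')
echo "$mibps MiB/s"      # 24.35 MiB/s, matching the reported mibps value
```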
00:21:30.749 [2024-11-20 12:35:36.311801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.749 [2024-11-20 12:35:36.312386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.l7YMJyAa8x 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.l7YMJyAa8x 00:21:30.749 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:31.008 [2024-11-20 12:35:36.609886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.008 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:31.267 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:31.267 [2024-11-20 12:35:36.946742] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:31.267 [2024-11-20 12:35:36.946973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.267 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:31.526 malloc0 00:21:31.526 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:31.785 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:31.785 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=956747 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 956747 /var/tmp/bdevperf.sock 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 956747 ']' 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.044 12:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.044 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.044 [2024-11-20 12:35:37.685916] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:32.044 [2024-11-20 12:35:37.685962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956747 ] 00:21:32.044 [2024-11-20 12:35:37.756838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.044 [2024-11-20 12:35:37.793911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.303 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.303 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.303 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:32.562 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:32.562 [2024-11-20 12:35:38.227877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:21:32.562 nvme0n1 00:21:32.562 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.821 Running I/O for 1 seconds... 00:21:33.758 6123.00 IOPS, 23.92 MiB/s 00:21:33.758 Latency(us) 00:21:33.758 [2024-11-20T11:35:39.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.758 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.758 Verification LBA range: start 0x0 length 0x2000 00:21:33.758 nvme0n1 : 1.02 6135.49 23.97 0.00 0.00 20704.97 4319.42 17992.61 00:21:33.758 [2024-11-20T11:35:39.522Z] =================================================================================================================== 00:21:33.758 [2024-11-20T11:35:39.522Z] Total : 6135.49 23.97 0.00 0.00 20704.97 4319.42 17992.61 00:21:33.758 { 00:21:33.758 "results": [ 00:21:33.758 { 00:21:33.758 "job": "nvme0n1", 00:21:33.758 "core_mask": "0x2", 00:21:33.758 "workload": "verify", 00:21:33.758 "status": "finished", 00:21:33.758 "verify_range": { 00:21:33.758 "start": 0, 00:21:33.758 "length": 8192 00:21:33.758 }, 00:21:33.758 "queue_depth": 128, 00:21:33.758 "io_size": 4096, 00:21:33.758 "runtime": 1.018827, 00:21:33.758 "iops": 6135.48718280925, 00:21:33.758 "mibps": 23.966746807848633, 00:21:33.758 "io_failed": 0, 00:21:33.758 "io_timeout": 0, 00:21:33.758 "avg_latency_us": 20704.971732231934, 00:21:33.758 "min_latency_us": 4319.418181818181, 00:21:33.758 "max_latency_us": 17992.61090909091 00:21:33.758 } 00:21:33.758 ], 00:21:33.758 "core_count": 1 00:21:33.758 } 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 956747 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 956747 ']' 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 
-- # kill -0 956747 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956747 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956747' 00:21:33.758 killing process with pid 956747 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 956747 00:21:33.758 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.758 00:21:33.758 Latency(us) 00:21:33.758 [2024-11-20T11:35:39.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.758 [2024-11-20T11:35:39.522Z] =================================================================================================================== 00:21:33.758 [2024-11-20T11:35:39.522Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.758 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 956747 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 956465 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 956465 ']' 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 956465 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956465 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956465' 00:21:34.018 killing process with pid 956465 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 956465 00:21:34.018 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 956465 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=957278 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 957278 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 957278 ']' 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.277 12:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.277 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.277 [2024-11-20 12:35:39.922323] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:34.277 [2024-11-20 12:35:39.922368] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.277 [2024-11-20 12:35:39.994846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.535 [2024-11-20 12:35:40.039772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.535 [2024-11-20 12:35:40.039805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.535 [2024-11-20 12:35:40.039812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.535 [2024-11-20 12:35:40.039818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.535 [2024-11-20 12:35:40.039823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:34.535 [2024-11-20 12:35:40.040338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.103 [2024-11-20 12:35:40.767908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.103 malloc0 00:21:35.103 [2024-11-20 12:35:40.795940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.103 [2024-11-20 12:35:40.796155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=957341 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 957341 /var/tmp/bdevperf.sock 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 957341 ']' 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.103 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.362 [2024-11-20 12:35:40.871212] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:21:35.362 [2024-11-20 12:35:40.871249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957341 ] 00:21:35.362 [2024-11-20 12:35:40.940530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.362 [2024-11-20 12:35:40.980125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.362 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.362 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.362 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l7YMJyAa8x 00:21:35.621 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:35.880 [2024-11-20 12:35:41.398563] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.880 nvme0n1 00:21:35.880 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.880 Running I/O for 1 seconds... 
00:21:37.259 6231.00 IOPS, 24.34 MiB/s 00:21:37.259 Latency(us) 00:21:37.259 [2024-11-20T11:35:43.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.259 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:37.259 Verification LBA range: start 0x0 length 0x2000 00:21:37.259 nvme0n1 : 1.02 6258.95 24.45 0.00 0.00 20293.06 4468.36 18350.08 00:21:37.259 [2024-11-20T11:35:43.023Z] =================================================================================================================== 00:21:37.259 [2024-11-20T11:35:43.023Z] Total : 6258.95 24.45 0.00 0.00 20293.06 4468.36 18350.08 00:21:37.259 { 00:21:37.259 "results": [ 00:21:37.259 { 00:21:37.259 "job": "nvme0n1", 00:21:37.259 "core_mask": "0x2", 00:21:37.259 "workload": "verify", 00:21:37.259 "status": "finished", 00:21:37.259 "verify_range": { 00:21:37.259 "start": 0, 00:21:37.259 "length": 8192 00:21:37.259 }, 00:21:37.259 "queue_depth": 128, 00:21:37.259 "io_size": 4096, 00:21:37.259 "runtime": 1.015985, 00:21:37.259 "iops": 6258.950673484353, 00:21:37.259 "mibps": 24.449026068298252, 00:21:37.259 "io_failed": 0, 00:21:37.259 "io_timeout": 0, 00:21:37.259 "avg_latency_us": 20293.058112624913, 00:21:37.259 "min_latency_us": 4468.363636363636, 00:21:37.259 "max_latency_us": 18350.08 00:21:37.259 } 00:21:37.259 ], 00:21:37.259 "core_count": 1 00:21:37.259 } 00:21:37.259 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:37.259 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.259 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.259 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.259 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:37.259 "subsystems": [ 00:21:37.259 { 00:21:37.259 "subsystem": "keyring", 
00:21:37.259 "config": [ 00:21:37.259 { 00:21:37.259 "method": "keyring_file_add_key", 00:21:37.259 "params": { 00:21:37.259 "name": "key0", 00:21:37.259 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:37.259 } 00:21:37.259 } 00:21:37.259 ] 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "subsystem": "iobuf", 00:21:37.259 "config": [ 00:21:37.259 { 00:21:37.259 "method": "iobuf_set_options", 00:21:37.259 "params": { 00:21:37.259 "small_pool_count": 8192, 00:21:37.259 "large_pool_count": 1024, 00:21:37.259 "small_bufsize": 8192, 00:21:37.259 "large_bufsize": 135168, 00:21:37.259 "enable_numa": false 00:21:37.259 } 00:21:37.259 } 00:21:37.259 ] 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "subsystem": "sock", 00:21:37.259 "config": [ 00:21:37.259 { 00:21:37.259 "method": "sock_set_default_impl", 00:21:37.259 "params": { 00:21:37.259 "impl_name": "posix" 00:21:37.259 } 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "method": "sock_impl_set_options", 00:21:37.259 "params": { 00:21:37.259 "impl_name": "ssl", 00:21:37.259 "recv_buf_size": 4096, 00:21:37.259 "send_buf_size": 4096, 00:21:37.259 "enable_recv_pipe": true, 00:21:37.259 "enable_quickack": false, 00:21:37.259 "enable_placement_id": 0, 00:21:37.259 "enable_zerocopy_send_server": true, 00:21:37.259 "enable_zerocopy_send_client": false, 00:21:37.259 "zerocopy_threshold": 0, 00:21:37.259 "tls_version": 0, 00:21:37.259 "enable_ktls": false 00:21:37.259 } 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "method": "sock_impl_set_options", 00:21:37.259 "params": { 00:21:37.259 "impl_name": "posix", 00:21:37.259 "recv_buf_size": 2097152, 00:21:37.259 "send_buf_size": 2097152, 00:21:37.259 "enable_recv_pipe": true, 00:21:37.259 "enable_quickack": false, 00:21:37.259 "enable_placement_id": 0, 00:21:37.259 "enable_zerocopy_send_server": true, 00:21:37.259 "enable_zerocopy_send_client": false, 00:21:37.259 "zerocopy_threshold": 0, 00:21:37.259 "tls_version": 0, 00:21:37.259 "enable_ktls": false 00:21:37.259 } 00:21:37.259 } 00:21:37.259 ] 
00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "subsystem": "vmd", 00:21:37.259 "config": [] 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "subsystem": "accel", 00:21:37.259 "config": [ 00:21:37.259 { 00:21:37.259 "method": "accel_set_options", 00:21:37.259 "params": { 00:21:37.259 "small_cache_size": 128, 00:21:37.259 "large_cache_size": 16, 00:21:37.259 "task_count": 2048, 00:21:37.259 "sequence_count": 2048, 00:21:37.259 "buf_count": 2048 00:21:37.259 } 00:21:37.259 } 00:21:37.259 ] 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "subsystem": "bdev", 00:21:37.259 "config": [ 00:21:37.259 { 00:21:37.259 "method": "bdev_set_options", 00:21:37.259 "params": { 00:21:37.259 "bdev_io_pool_size": 65535, 00:21:37.259 "bdev_io_cache_size": 256, 00:21:37.259 "bdev_auto_examine": true, 00:21:37.259 "iobuf_small_cache_size": 128, 00:21:37.259 "iobuf_large_cache_size": 16 00:21:37.259 } 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "method": "bdev_raid_set_options", 00:21:37.259 "params": { 00:21:37.259 "process_window_size_kb": 1024, 00:21:37.259 "process_max_bandwidth_mb_sec": 0 00:21:37.259 } 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "method": "bdev_iscsi_set_options", 00:21:37.259 "params": { 00:21:37.259 "timeout_sec": 30 00:21:37.259 } 00:21:37.259 }, 00:21:37.259 { 00:21:37.259 "method": "bdev_nvme_set_options", 00:21:37.259 "params": { 00:21:37.259 "action_on_timeout": "none", 00:21:37.259 "timeout_us": 0, 00:21:37.259 "timeout_admin_us": 0, 00:21:37.259 "keep_alive_timeout_ms": 10000, 00:21:37.259 "arbitration_burst": 0, 00:21:37.259 "low_priority_weight": 0, 00:21:37.259 "medium_priority_weight": 0, 00:21:37.259 "high_priority_weight": 0, 00:21:37.259 "nvme_adminq_poll_period_us": 10000, 00:21:37.259 "nvme_ioq_poll_period_us": 0, 00:21:37.259 "io_queue_requests": 0, 00:21:37.259 "delay_cmd_submit": true, 00:21:37.259 "transport_retry_count": 4, 00:21:37.259 "bdev_retry_count": 3, 00:21:37.259 "transport_ack_timeout": 0, 00:21:37.259 "ctrlr_loss_timeout_sec": 0, 00:21:37.259 
"reconnect_delay_sec": 0, 00:21:37.259 "fast_io_fail_timeout_sec": 0, 00:21:37.259 "disable_auto_failback": false, 00:21:37.259 "generate_uuids": false, 00:21:37.259 "transport_tos": 0, 00:21:37.259 "nvme_error_stat": false, 00:21:37.259 "rdma_srq_size": 0, 00:21:37.259 "io_path_stat": false, 00:21:37.259 "allow_accel_sequence": false, 00:21:37.259 "rdma_max_cq_size": 0, 00:21:37.259 "rdma_cm_event_timeout_ms": 0, 00:21:37.259 "dhchap_digests": [ 00:21:37.259 "sha256", 00:21:37.259 "sha384", 00:21:37.259 "sha512" 00:21:37.259 ], 00:21:37.259 "dhchap_dhgroups": [ 00:21:37.259 "null", 00:21:37.259 "ffdhe2048", 00:21:37.260 "ffdhe3072", 00:21:37.260 "ffdhe4096", 00:21:37.260 "ffdhe6144", 00:21:37.260 "ffdhe8192" 00:21:37.260 ] 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "bdev_nvme_set_hotplug", 00:21:37.260 "params": { 00:21:37.260 "period_us": 100000, 00:21:37.260 "enable": false 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "bdev_malloc_create", 00:21:37.260 "params": { 00:21:37.260 "name": "malloc0", 00:21:37.260 "num_blocks": 8192, 00:21:37.260 "block_size": 4096, 00:21:37.260 "physical_block_size": 4096, 00:21:37.260 "uuid": "3a288912-75dd-4b49-9a27-f28a193a188c", 00:21:37.260 "optimal_io_boundary": 0, 00:21:37.260 "md_size": 0, 00:21:37.260 "dif_type": 0, 00:21:37.260 "dif_is_head_of_md": false, 00:21:37.260 "dif_pi_format": 0 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "bdev_wait_for_examine" 00:21:37.260 } 00:21:37.260 ] 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "subsystem": "nbd", 00:21:37.260 "config": [] 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "subsystem": "scheduler", 00:21:37.260 "config": [ 00:21:37.260 { 00:21:37.260 "method": "framework_set_scheduler", 00:21:37.260 "params": { 00:21:37.260 "name": "static" 00:21:37.260 } 00:21:37.260 } 00:21:37.260 ] 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "subsystem": "nvmf", 00:21:37.260 "config": [ 00:21:37.260 { 00:21:37.260 
"method": "nvmf_set_config", 00:21:37.260 "params": { 00:21:37.260 "discovery_filter": "match_any", 00:21:37.260 "admin_cmd_passthru": { 00:21:37.260 "identify_ctrlr": false 00:21:37.260 }, 00:21:37.260 "dhchap_digests": [ 00:21:37.260 "sha256", 00:21:37.260 "sha384", 00:21:37.260 "sha512" 00:21:37.260 ], 00:21:37.260 "dhchap_dhgroups": [ 00:21:37.260 "null", 00:21:37.260 "ffdhe2048", 00:21:37.260 "ffdhe3072", 00:21:37.260 "ffdhe4096", 00:21:37.260 "ffdhe6144", 00:21:37.260 "ffdhe8192" 00:21:37.260 ] 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "nvmf_set_max_subsystems", 00:21:37.260 "params": { 00:21:37.260 "max_subsystems": 1024 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "nvmf_set_crdt", 00:21:37.260 "params": { 00:21:37.260 "crdt1": 0, 00:21:37.260 "crdt2": 0, 00:21:37.260 "crdt3": 0 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "nvmf_create_transport", 00:21:37.260 "params": { 00:21:37.260 "trtype": "TCP", 00:21:37.260 "max_queue_depth": 128, 00:21:37.260 "max_io_qpairs_per_ctrlr": 127, 00:21:37.260 "in_capsule_data_size": 4096, 00:21:37.260 "max_io_size": 131072, 00:21:37.260 "io_unit_size": 131072, 00:21:37.260 "max_aq_depth": 128, 00:21:37.260 "num_shared_buffers": 511, 00:21:37.260 "buf_cache_size": 4294967295, 00:21:37.260 "dif_insert_or_strip": false, 00:21:37.260 "zcopy": false, 00:21:37.260 "c2h_success": false, 00:21:37.260 "sock_priority": 0, 00:21:37.260 "abort_timeout_sec": 1, 00:21:37.260 "ack_timeout": 0, 00:21:37.260 "data_wr_pool_size": 0 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "nvmf_create_subsystem", 00:21:37.260 "params": { 00:21:37.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.260 "allow_any_host": false, 00:21:37.260 "serial_number": "00000000000000000000", 00:21:37.260 "model_number": "SPDK bdev Controller", 00:21:37.260 "max_namespaces": 32, 00:21:37.260 "min_cntlid": 1, 00:21:37.260 "max_cntlid": 65519, 00:21:37.260 "ana_reporting": 
false 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "nvmf_subsystem_add_host", 00:21:37.260 "params": { 00:21:37.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.260 "host": "nqn.2016-06.io.spdk:host1", 00:21:37.260 "psk": "key0" 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "nvmf_subsystem_add_ns", 00:21:37.260 "params": { 00:21:37.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.260 "namespace": { 00:21:37.260 "nsid": 1, 00:21:37.260 "bdev_name": "malloc0", 00:21:37.260 "nguid": "3A28891275DD4B499A27F28A193A188C", 00:21:37.260 "uuid": "3a288912-75dd-4b49-9a27-f28a193a188c", 00:21:37.260 "no_auto_visible": false 00:21:37.260 } 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "nvmf_subsystem_add_listener", 00:21:37.260 "params": { 00:21:37.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.260 "listen_address": { 00:21:37.260 "trtype": "TCP", 00:21:37.260 "adrfam": "IPv4", 00:21:37.260 "traddr": "10.0.0.2", 00:21:37.260 "trsvcid": "4420" 00:21:37.260 }, 00:21:37.260 "secure_channel": false, 00:21:37.260 "sock_impl": "ssl" 00:21:37.260 } 00:21:37.260 } 00:21:37.260 ] 00:21:37.260 } 00:21:37.260 ] 00:21:37.260 }' 00:21:37.260 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:37.260 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:37.260 "subsystems": [ 00:21:37.260 { 00:21:37.260 "subsystem": "keyring", 00:21:37.260 "config": [ 00:21:37.260 { 00:21:37.260 "method": "keyring_file_add_key", 00:21:37.260 "params": { 00:21:37.260 "name": "key0", 00:21:37.260 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:37.260 } 00:21:37.260 } 00:21:37.260 ] 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "subsystem": "iobuf", 00:21:37.260 "config": [ 00:21:37.260 { 00:21:37.260 "method": "iobuf_set_options", 00:21:37.260 "params": { 00:21:37.260 "small_pool_count": 
8192, 00:21:37.260 "large_pool_count": 1024, 00:21:37.260 "small_bufsize": 8192, 00:21:37.260 "large_bufsize": 135168, 00:21:37.260 "enable_numa": false 00:21:37.260 } 00:21:37.260 } 00:21:37.260 ] 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "subsystem": "sock", 00:21:37.260 "config": [ 00:21:37.260 { 00:21:37.260 "method": "sock_set_default_impl", 00:21:37.260 "params": { 00:21:37.260 "impl_name": "posix" 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "sock_impl_set_options", 00:21:37.260 "params": { 00:21:37.260 "impl_name": "ssl", 00:21:37.260 "recv_buf_size": 4096, 00:21:37.260 "send_buf_size": 4096, 00:21:37.260 "enable_recv_pipe": true, 00:21:37.260 "enable_quickack": false, 00:21:37.260 "enable_placement_id": 0, 00:21:37.260 "enable_zerocopy_send_server": true, 00:21:37.260 "enable_zerocopy_send_client": false, 00:21:37.260 "zerocopy_threshold": 0, 00:21:37.260 "tls_version": 0, 00:21:37.260 "enable_ktls": false 00:21:37.260 } 00:21:37.260 }, 00:21:37.260 { 00:21:37.260 "method": "sock_impl_set_options", 00:21:37.260 "params": { 00:21:37.260 "impl_name": "posix", 00:21:37.260 "recv_buf_size": 2097152, 00:21:37.260 "send_buf_size": 2097152, 00:21:37.260 "enable_recv_pipe": true, 00:21:37.260 "enable_quickack": false, 00:21:37.260 "enable_placement_id": 0, 00:21:37.260 "enable_zerocopy_send_server": true, 00:21:37.260 "enable_zerocopy_send_client": false, 00:21:37.260 "zerocopy_threshold": 0, 00:21:37.260 "tls_version": 0, 00:21:37.260 "enable_ktls": false 00:21:37.260 } 00:21:37.260 } 00:21:37.260 ] 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "subsystem": "vmd", 00:21:37.261 "config": [] 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "subsystem": "accel", 00:21:37.261 "config": [ 00:21:37.261 { 00:21:37.261 "method": "accel_set_options", 00:21:37.261 "params": { 00:21:37.261 "small_cache_size": 128, 00:21:37.261 "large_cache_size": 16, 00:21:37.261 "task_count": 2048, 00:21:37.261 "sequence_count": 2048, 00:21:37.261 "buf_count": 2048 
00:21:37.261 } 00:21:37.261 } 00:21:37.261 ] 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "subsystem": "bdev", 00:21:37.261 "config": [ 00:21:37.261 { 00:21:37.261 "method": "bdev_set_options", 00:21:37.261 "params": { 00:21:37.261 "bdev_io_pool_size": 65535, 00:21:37.261 "bdev_io_cache_size": 256, 00:21:37.261 "bdev_auto_examine": true, 00:21:37.261 "iobuf_small_cache_size": 128, 00:21:37.261 "iobuf_large_cache_size": 16 00:21:37.261 } 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "method": "bdev_raid_set_options", 00:21:37.261 "params": { 00:21:37.261 "process_window_size_kb": 1024, 00:21:37.261 "process_max_bandwidth_mb_sec": 0 00:21:37.261 } 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "method": "bdev_iscsi_set_options", 00:21:37.261 "params": { 00:21:37.261 "timeout_sec": 30 00:21:37.261 } 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "method": "bdev_nvme_set_options", 00:21:37.261 "params": { 00:21:37.261 "action_on_timeout": "none", 00:21:37.261 "timeout_us": 0, 00:21:37.261 "timeout_admin_us": 0, 00:21:37.261 "keep_alive_timeout_ms": 10000, 00:21:37.261 "arbitration_burst": 0, 00:21:37.261 "low_priority_weight": 0, 00:21:37.261 "medium_priority_weight": 0, 00:21:37.261 "high_priority_weight": 0, 00:21:37.261 "nvme_adminq_poll_period_us": 10000, 00:21:37.261 "nvme_ioq_poll_period_us": 0, 00:21:37.261 "io_queue_requests": 512, 00:21:37.261 "delay_cmd_submit": true, 00:21:37.261 "transport_retry_count": 4, 00:21:37.261 "bdev_retry_count": 3, 00:21:37.261 "transport_ack_timeout": 0, 00:21:37.261 "ctrlr_loss_timeout_sec": 0, 00:21:37.261 "reconnect_delay_sec": 0, 00:21:37.261 "fast_io_fail_timeout_sec": 0, 00:21:37.261 "disable_auto_failback": false, 00:21:37.261 "generate_uuids": false, 00:21:37.261 "transport_tos": 0, 00:21:37.261 "nvme_error_stat": false, 00:21:37.261 "rdma_srq_size": 0, 00:21:37.261 "io_path_stat": false, 00:21:37.261 "allow_accel_sequence": false, 00:21:37.261 "rdma_max_cq_size": 0, 00:21:37.261 "rdma_cm_event_timeout_ms": 0, 00:21:37.261 
"dhchap_digests": [ 00:21:37.261 "sha256", 00:21:37.261 "sha384", 00:21:37.261 "sha512" 00:21:37.261 ], 00:21:37.261 "dhchap_dhgroups": [ 00:21:37.261 "null", 00:21:37.261 "ffdhe2048", 00:21:37.261 "ffdhe3072", 00:21:37.261 "ffdhe4096", 00:21:37.261 "ffdhe6144", 00:21:37.261 "ffdhe8192" 00:21:37.261 ] 00:21:37.261 } 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "method": "bdev_nvme_attach_controller", 00:21:37.261 "params": { 00:21:37.261 "name": "nvme0", 00:21:37.261 "trtype": "TCP", 00:21:37.261 "adrfam": "IPv4", 00:21:37.261 "traddr": "10.0.0.2", 00:21:37.261 "trsvcid": "4420", 00:21:37.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.261 "prchk_reftag": false, 00:21:37.261 "prchk_guard": false, 00:21:37.261 "ctrlr_loss_timeout_sec": 0, 00:21:37.261 "reconnect_delay_sec": 0, 00:21:37.261 "fast_io_fail_timeout_sec": 0, 00:21:37.261 "psk": "key0", 00:21:37.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.261 "hdgst": false, 00:21:37.261 "ddgst": false, 00:21:37.261 "multipath": "multipath" 00:21:37.261 } 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "method": "bdev_nvme_set_hotplug", 00:21:37.261 "params": { 00:21:37.261 "period_us": 100000, 00:21:37.261 "enable": false 00:21:37.261 } 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "method": "bdev_enable_histogram", 00:21:37.261 "params": { 00:21:37.261 "name": "nvme0n1", 00:21:37.261 "enable": true 00:21:37.261 } 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "method": "bdev_wait_for_examine" 00:21:37.261 } 00:21:37.261 ] 00:21:37.261 }, 00:21:37.261 { 00:21:37.261 "subsystem": "nbd", 00:21:37.261 "config": [] 00:21:37.261 } 00:21:37.261 ] 00:21:37.261 }' 00:21:37.261 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 957341 00:21:37.261 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 957341 ']' 00:21:37.261 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 957341 00:21:37.261 12:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:37.261 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.261 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957341 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957341' 00:21:37.521 killing process with pid 957341 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 957341 00:21:37.521 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.521 00:21:37.521 Latency(us) 00:21:37.521 [2024-11-20T11:35:43.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.521 [2024-11-20T11:35:43.285Z] =================================================================================================================== 00:21:37.521 [2024-11-20T11:35:43.285Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 957341 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 957278 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 957278 ']' 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 957278 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.521 12:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957278 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957278' 00:21:37.521 killing process with pid 957278 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 957278 00:21:37.521 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 957278 00:21:37.780 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:37.780 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.780 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.780 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:37.780 "subsystems": [ 00:21:37.780 { 00:21:37.780 "subsystem": "keyring", 00:21:37.780 "config": [ 00:21:37.780 { 00:21:37.780 "method": "keyring_file_add_key", 00:21:37.780 "params": { 00:21:37.780 "name": "key0", 00:21:37.780 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:37.780 } 00:21:37.780 } 00:21:37.780 ] 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "subsystem": "iobuf", 00:21:37.780 "config": [ 00:21:37.780 { 00:21:37.780 "method": "iobuf_set_options", 00:21:37.780 "params": { 00:21:37.780 "small_pool_count": 8192, 00:21:37.780 "large_pool_count": 1024, 00:21:37.780 "small_bufsize": 8192, 00:21:37.780 "large_bufsize": 135168, 00:21:37.780 "enable_numa": false 00:21:37.780 } 00:21:37.780 } 00:21:37.780 ] 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "subsystem": "sock", 00:21:37.780 "config": [ 00:21:37.780 { 
00:21:37.780 "method": "sock_set_default_impl", 00:21:37.780 "params": { 00:21:37.780 "impl_name": "posix" 00:21:37.780 } 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "method": "sock_impl_set_options", 00:21:37.780 "params": { 00:21:37.780 "impl_name": "ssl", 00:21:37.780 "recv_buf_size": 4096, 00:21:37.780 "send_buf_size": 4096, 00:21:37.780 "enable_recv_pipe": true, 00:21:37.780 "enable_quickack": false, 00:21:37.780 "enable_placement_id": 0, 00:21:37.780 "enable_zerocopy_send_server": true, 00:21:37.780 "enable_zerocopy_send_client": false, 00:21:37.780 "zerocopy_threshold": 0, 00:21:37.780 "tls_version": 0, 00:21:37.780 "enable_ktls": false 00:21:37.780 } 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "method": "sock_impl_set_options", 00:21:37.780 "params": { 00:21:37.780 "impl_name": "posix", 00:21:37.780 "recv_buf_size": 2097152, 00:21:37.780 "send_buf_size": 2097152, 00:21:37.780 "enable_recv_pipe": true, 00:21:37.780 "enable_quickack": false, 00:21:37.780 "enable_placement_id": 0, 00:21:37.780 "enable_zerocopy_send_server": true, 00:21:37.780 "enable_zerocopy_send_client": false, 00:21:37.780 "zerocopy_threshold": 0, 00:21:37.780 "tls_version": 0, 00:21:37.780 "enable_ktls": false 00:21:37.780 } 00:21:37.780 } 00:21:37.780 ] 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "subsystem": "vmd", 00:21:37.780 "config": [] 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "subsystem": "accel", 00:21:37.780 "config": [ 00:21:37.780 { 00:21:37.780 "method": "accel_set_options", 00:21:37.780 "params": { 00:21:37.780 "small_cache_size": 128, 00:21:37.780 "large_cache_size": 16, 00:21:37.780 "task_count": 2048, 00:21:37.780 "sequence_count": 2048, 00:21:37.780 "buf_count": 2048 00:21:37.780 } 00:21:37.780 } 00:21:37.780 ] 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "subsystem": "bdev", 00:21:37.780 "config": [ 00:21:37.780 { 00:21:37.780 "method": "bdev_set_options", 00:21:37.780 "params": { 00:21:37.780 "bdev_io_pool_size": 65535, 00:21:37.780 "bdev_io_cache_size": 256, 
00:21:37.780 "bdev_auto_examine": true, 00:21:37.780 "iobuf_small_cache_size": 128, 00:21:37.780 "iobuf_large_cache_size": 16 00:21:37.780 } 00:21:37.780 }, 00:21:37.780 { 00:21:37.780 "method": "bdev_raid_set_options", 00:21:37.780 "params": { 00:21:37.780 "process_window_size_kb": 1024, 00:21:37.780 "process_max_bandwidth_mb_sec": 0 00:21:37.780 } 00:21:37.780 }, 00:21:37.780 { 00:21:37.781 "method": "bdev_iscsi_set_options", 00:21:37.781 "params": { 00:21:37.781 "timeout_sec": 30 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "bdev_nvme_set_options", 00:21:37.781 "params": { 00:21:37.781 "action_on_timeout": "none", 00:21:37.781 "timeout_us": 0, 00:21:37.781 "timeout_admin_us": 0, 00:21:37.781 "keep_alive_timeout_ms": 10000, 00:21:37.781 "arbitration_burst": 0, 00:21:37.781 "low_priority_weight": 0, 00:21:37.781 "medium_priority_weight": 0, 00:21:37.781 "high_priority_weight": 0, 00:21:37.781 "nvme_adminq_poll_period_us": 10000, 00:21:37.781 "nvme_ioq_poll_period_us": 0, 00:21:37.781 "io_queue_requests": 0, 00:21:37.781 "delay_cmd_submit": true, 00:21:37.781 "transport_retry_count": 4, 00:21:37.781 "bdev_retry_count": 3, 00:21:37.781 "transport_ack_timeout": 0, 00:21:37.781 "ctrlr_loss_timeout_sec": 0, 00:21:37.781 "reconnect_delay_sec": 0, 00:21:37.781 "fast_io_fail_timeout_sec": 0, 00:21:37.781 "disable_auto_failback": false, 00:21:37.781 "generate_uuids": false, 00:21:37.781 "transport_tos": 0, 00:21:37.781 "nvme_error_stat": false, 00:21:37.781 "rdma_srq_size": 0, 00:21:37.781 "io_path_stat": false, 00:21:37.781 "allow_accel_sequence": false, 00:21:37.781 "rdma_max_cq_size": 0, 00:21:37.781 "rdma_cm_event_timeout_ms": 0, 00:21:37.781 "dhchap_digests": [ 00:21:37.781 "sha256", 00:21:37.781 "sha384", 00:21:37.781 "sha512" 00:21:37.781 ], 00:21:37.781 "dhchap_dhgroups": [ 00:21:37.781 "null", 00:21:37.781 "ffdhe2048", 00:21:37.781 "ffdhe3072", 00:21:37.781 "ffdhe4096", 00:21:37.781 "ffdhe6144", 00:21:37.781 "ffdhe8192" 00:21:37.781 ] 
00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "bdev_nvme_set_hotplug", 00:21:37.781 "params": { 00:21:37.781 "period_us": 100000, 00:21:37.781 "enable": false 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "bdev_malloc_create", 00:21:37.781 "params": { 00:21:37.781 "name": "malloc0", 00:21:37.781 "num_blocks": 8192, 00:21:37.781 "block_size": 4096, 00:21:37.781 "physical_block_size": 4096, 00:21:37.781 "uuid": "3a288912-75dd-4b49-9a27-f28a193a188c", 00:21:37.781 "optimal_io_boundary": 0, 00:21:37.781 "md_size": 0, 00:21:37.781 "dif_type": 0, 00:21:37.781 "dif_is_head_of_md": false, 00:21:37.781 "dif_pi_format": 0 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "bdev_wait_for_examine" 00:21:37.781 } 00:21:37.781 ] 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "subsystem": "nbd", 00:21:37.781 "config": [] 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "subsystem": "scheduler", 00:21:37.781 "config": [ 00:21:37.781 { 00:21:37.781 "method": "framework_set_scheduler", 00:21:37.781 "params": { 00:21:37.781 "name": "static" 00:21:37.781 } 00:21:37.781 } 00:21:37.781 ] 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "subsystem": "nvmf", 00:21:37.781 "config": [ 00:21:37.781 { 00:21:37.781 "method": "nvmf_set_config", 00:21:37.781 "params": { 00:21:37.781 "discovery_filter": "match_any", 00:21:37.781 "admin_cmd_passthru": { 00:21:37.781 "identify_ctrlr": false 00:21:37.781 }, 00:21:37.781 "dhchap_digests": [ 00:21:37.781 "sha256", 00:21:37.781 "sha384", 00:21:37.781 "sha512" 00:21:37.781 ], 00:21:37.781 "dhchap_dhgroups": [ 00:21:37.781 "null", 00:21:37.781 "ffdhe2048", 00:21:37.781 "ffdhe3072", 00:21:37.781 "ffdhe4096", 00:21:37.781 "ffdhe6144", 00:21:37.781 "ffdhe8192" 00:21:37.781 ] 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "nvmf_set_max_subsystems", 00:21:37.781 "params": { 00:21:37.781 "max_subsystems": 1024 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": 
"nvmf_set_crdt", 00:21:37.781 "params": { 00:21:37.781 "crdt1": 0, 00:21:37.781 "crdt2": 0, 00:21:37.781 "crdt3": 0 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "nvmf_create_transport", 00:21:37.781 "params": { 00:21:37.781 "trtype": "TCP", 00:21:37.781 "max_queue_depth": 128, 00:21:37.781 "max_io_qpairs_per_ctrlr": 127, 00:21:37.781 "in_capsule_data_size": 4096, 00:21:37.781 "max_io_size": 131072, 00:21:37.781 "io_unit_size": 131072, 00:21:37.781 "max_aq_depth": 128, 00:21:37.781 "num_shared_buffers": 511, 00:21:37.781 "buf_cache_size": 4294967295, 00:21:37.781 "dif_insert_or_strip": false, 00:21:37.781 "zcopy": false, 00:21:37.781 "c2h_success": false, 00:21:37.781 "sock_priority": 0, 00:21:37.781 "abort_timeout_sec": 1, 00:21:37.781 "ack_timeout": 0, 00:21:37.781 "data_wr_pool_size": 0 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "nvmf_create_subsystem", 00:21:37.781 "params": { 00:21:37.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.781 "allow_any_host": false, 00:21:37.781 "serial_number": "00000000000000000000", 00:21:37.781 "model_number": "SPDK bdev Controller", 00:21:37.781 "max_namespaces": 32, 00:21:37.781 "min_cntlid": 1, 00:21:37.781 "max_cntlid": 65519, 00:21:37.781 "ana_reporting": false 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "nvmf_subsystem_add_host", 00:21:37.781 "params": { 00:21:37.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.781 "host": "nqn.2016-06.io.spdk:host1", 00:21:37.781 "psk": "key0" 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 00:21:37.781 "method": "nvmf_subsystem_add_ns", 00:21:37.781 "params": { 00:21:37.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.781 "namespace": { 00:21:37.781 "nsid": 1, 00:21:37.781 "bdev_name": "malloc0", 00:21:37.781 "nguid": "3A28891275DD4B499A27F28A193A188C", 00:21:37.781 "uuid": "3a288912-75dd-4b49-9a27-f28a193a188c", 00:21:37.781 "no_auto_visible": false 00:21:37.781 } 00:21:37.781 } 00:21:37.781 }, 00:21:37.781 { 
00:21:37.781 "method": "nvmf_subsystem_add_listener", 00:21:37.781 "params": { 00:21:37.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.781 "listen_address": { 00:21:37.781 "trtype": "TCP", 00:21:37.781 "adrfam": "IPv4", 00:21:37.781 "traddr": "10.0.0.2", 00:21:37.781 "trsvcid": "4420" 00:21:37.781 }, 00:21:37.781 "secure_channel": false, 00:21:37.781 "sock_impl": "ssl" 00:21:37.781 } 00:21:37.781 } 00:21:37.781 ] 00:21:37.781 } 00:21:37.781 ] 00:21:37.781 }' 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=957847 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 957847 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 957847 ']' 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.781 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.781 [2024-11-20 12:35:43.461403] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:21:37.781 [2024-11-20 12:35:43.461468] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.781 [2024-11-20 12:35:43.536545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.041 [2024-11-20 12:35:43.570609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.041 [2024-11-20 12:35:43.570640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.041 [2024-11-20 12:35:43.570647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.041 [2024-11-20 12:35:43.570653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.041 [2024-11-20 12:35:43.570658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:38.041 [2024-11-20 12:35:43.571140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.041 [2024-11-20 12:35:43.780590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.300 [2024-11-20 12:35:43.812622] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.300 [2024-11-20 12:35:43.812844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.560 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.560 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:38.560 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.560 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.560 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=958124 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 958124 /var/tmp/bdevperf.sock 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 958124 ']' 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:38.561 "subsystems": [ 00:21:38.561 { 00:21:38.561 "subsystem": "keyring", 00:21:38.561 "config": [ 00:21:38.561 { 00:21:38.561 "method": "keyring_file_add_key", 00:21:38.561 "params": { 00:21:38.561 "name": "key0", 00:21:38.561 "path": "/tmp/tmp.l7YMJyAa8x" 00:21:38.561 } 00:21:38.561 } 00:21:38.561 ] 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "subsystem": "iobuf", 00:21:38.561 "config": [ 00:21:38.561 { 00:21:38.561 "method": "iobuf_set_options", 00:21:38.561 "params": { 00:21:38.561 "small_pool_count": 8192, 00:21:38.561 "large_pool_count": 1024, 00:21:38.561 "small_bufsize": 8192, 00:21:38.561 "large_bufsize": 135168, 00:21:38.561 "enable_numa": false 00:21:38.561 } 00:21:38.561 } 00:21:38.561 ] 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "subsystem": "sock", 00:21:38.561 "config": [ 00:21:38.561 { 00:21:38.561 "method": "sock_set_default_impl", 00:21:38.561 "params": { 00:21:38.561 "impl_name": "posix" 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "sock_impl_set_options", 00:21:38.561 "params": { 00:21:38.561 "impl_name": "ssl", 00:21:38.561 "recv_buf_size": 4096, 00:21:38.561 "send_buf_size": 4096, 00:21:38.561 "enable_recv_pipe": true, 00:21:38.561 "enable_quickack": false, 00:21:38.561 "enable_placement_id": 0, 00:21:38.561 "enable_zerocopy_send_server": true, 00:21:38.561 "enable_zerocopy_send_client": false, 00:21:38.561 "zerocopy_threshold": 0, 00:21:38.561 "tls_version": 0, 00:21:38.561 "enable_ktls": false 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "sock_impl_set_options", 00:21:38.561 "params": { 
00:21:38.561 "impl_name": "posix", 00:21:38.561 "recv_buf_size": 2097152, 00:21:38.561 "send_buf_size": 2097152, 00:21:38.561 "enable_recv_pipe": true, 00:21:38.561 "enable_quickack": false, 00:21:38.561 "enable_placement_id": 0, 00:21:38.561 "enable_zerocopy_send_server": true, 00:21:38.561 "enable_zerocopy_send_client": false, 00:21:38.561 "zerocopy_threshold": 0, 00:21:38.561 "tls_version": 0, 00:21:38.561 "enable_ktls": false 00:21:38.561 } 00:21:38.561 } 00:21:38.561 ] 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "subsystem": "vmd", 00:21:38.561 "config": [] 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "subsystem": "accel", 00:21:38.561 "config": [ 00:21:38.561 { 00:21:38.561 "method": "accel_set_options", 00:21:38.561 "params": { 00:21:38.561 "small_cache_size": 128, 00:21:38.561 "large_cache_size": 16, 00:21:38.561 "task_count": 2048, 00:21:38.561 "sequence_count": 2048, 00:21:38.561 "buf_count": 2048 00:21:38.561 } 00:21:38.561 } 00:21:38.561 ] 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "subsystem": "bdev", 00:21:38.561 "config": [ 00:21:38.561 { 00:21:38.561 "method": "bdev_set_options", 00:21:38.561 "params": { 00:21:38.561 "bdev_io_pool_size": 65535, 00:21:38.561 "bdev_io_cache_size": 256, 00:21:38.561 "bdev_auto_examine": true, 00:21:38.561 "iobuf_small_cache_size": 128, 00:21:38.561 "iobuf_large_cache_size": 16 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "bdev_raid_set_options", 00:21:38.561 "params": { 00:21:38.561 "process_window_size_kb": 1024, 00:21:38.561 "process_max_bandwidth_mb_sec": 0 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "bdev_iscsi_set_options", 00:21:38.561 "params": { 00:21:38.561 "timeout_sec": 30 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "bdev_nvme_set_options", 00:21:38.561 "params": { 00:21:38.561 "action_on_timeout": "none", 00:21:38.561 "timeout_us": 0, 00:21:38.561 "timeout_admin_us": 0, 00:21:38.561 "keep_alive_timeout_ms": 10000, 00:21:38.561 
"arbitration_burst": 0, 00:21:38.561 "low_priority_weight": 0, 00:21:38.561 "medium_priority_weight": 0, 00:21:38.561 "high_priority_weight": 0, 00:21:38.561 "nvme_adminq_poll_period_us": 10000, 00:21:38.561 "nvme_ioq_poll_period_us": 0, 00:21:38.561 "io_queue_requests": 512, 00:21:38.561 "delay_cmd_submit": true, 00:21:38.561 "transport_retry_count": 4, 00:21:38.561 "bdev_retry_count": 3, 00:21:38.561 "transport_ack_timeout": 0, 00:21:38.561 "ctrlr_loss_timeout_sec": 0, 00:21:38.561 "reconnect_delay_sec": 0, 00:21:38.561 "fast_io_fail_timeout_sec": 0, 00:21:38.561 "disable_auto_failback": false, 00:21:38.561 "generate_uuids": false, 00:21:38.561 "transport_tos": 0, 00:21:38.561 "nvme_error_stat": false, 00:21:38.561 "rdma_srq_size": 0, 00:21:38.561 "io_path_stat": false, 00:21:38.561 "allow_accel_sequence": false, 00:21:38.561 "rdma_max_cq_size": 0, 00:21:38.561 "rdma_cm_event_timeout_ms": 0, 00:21:38.561 "dhchap_digests": [ 00:21:38.561 "sha256", 00:21:38.561 "sha384", 00:21:38.561 "sha512" 00:21:38.561 ], 00:21:38.561 "dhchap_dhgroups": [ 00:21:38.561 "null", 00:21:38.561 "ffdhe2048", 00:21:38.561 "ffdhe3072", 00:21:38.561 "ffdhe4096", 00:21:38.561 "ffdhe6144", 00:21:38.561 "ffdhe8192" 00:21:38.561 ] 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "bdev_nvme_attach_controller", 00:21:38.561 "params": { 00:21:38.561 "name": "nvme0", 00:21:38.561 "trtype": "TCP", 00:21:38.561 "adrfam": "IPv4", 00:21:38.561 "traddr": "10.0.0.2", 00:21:38.561 "trsvcid": "4420", 00:21:38.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.561 "prchk_reftag": false, 00:21:38.561 "prchk_guard": false, 00:21:38.561 "ctrlr_loss_timeout_sec": 0, 00:21:38.561 "reconnect_delay_sec": 0, 00:21:38.561 "fast_io_fail_timeout_sec": 0, 00:21:38.561 "psk": "key0", 00:21:38.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.561 "hdgst": false, 00:21:38.561 "ddgst": false, 00:21:38.561 "multipath": "multipath" 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 
"method": "bdev_nvme_set_hotplug", 00:21:38.561 "params": { 00:21:38.561 "period_us": 100000, 00:21:38.561 "enable": false 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "bdev_enable_histogram", 00:21:38.561 "params": { 00:21:38.561 "name": "nvme0n1", 00:21:38.561 "enable": true 00:21:38.561 } 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "method": "bdev_wait_for_examine" 00:21:38.561 } 00:21:38.561 ] 00:21:38.561 }, 00:21:38.561 { 00:21:38.561 "subsystem": "nbd", 00:21:38.561 "config": [] 00:21:38.561 } 00:21:38.561 ] 00:21:38.561 }' 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.561 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.820 [2024-11-20 12:35:44.357100] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:38.820 [2024-11-20 12:35:44.357145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958124 ] 00:21:38.820 [2024-11-20 12:35:44.429989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.820 [2024-11-20 12:35:44.469183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.078 [2024-11-20 12:35:44.620483] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.647 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.647 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:39.647 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.647 12:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:39.647 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.647 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.906 Running I/O for 1 seconds... 00:21:40.843 6123.00 IOPS, 23.92 MiB/s 00:21:40.843 Latency(us) 00:21:40.843 [2024-11-20T11:35:46.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.843 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:40.843 Verification LBA range: start 0x0 length 0x2000 00:21:40.843 nvme0n1 : 1.01 6173.09 24.11 0.00 0.00 20589.73 5808.87 20614.05 00:21:40.843 [2024-11-20T11:35:46.607Z] =================================================================================================================== 00:21:40.843 [2024-11-20T11:35:46.607Z] Total : 6173.09 24.11 0.00 0.00 20589.73 5808.87 20614.05 00:21:40.843 { 00:21:40.843 "results": [ 00:21:40.843 { 00:21:40.843 "job": "nvme0n1", 00:21:40.843 "core_mask": "0x2", 00:21:40.843 "workload": "verify", 00:21:40.843 "status": "finished", 00:21:40.843 "verify_range": { 00:21:40.843 "start": 0, 00:21:40.843 "length": 8192 00:21:40.843 }, 00:21:40.843 "queue_depth": 128, 00:21:40.843 "io_size": 4096, 00:21:40.843 "runtime": 1.012621, 00:21:40.843 "iops": 6173.089438200472, 00:21:40.843 "mibps": 24.113630617970593, 00:21:40.843 "io_failed": 0, 00:21:40.843 "io_timeout": 0, 00:21:40.843 "avg_latency_us": 20589.72783758235, 00:21:40.843 "min_latency_us": 5808.872727272727, 00:21:40.843 "max_latency_us": 20614.05090909091 00:21:40.843 } 00:21:40.843 ], 00:21:40.843 "core_count": 1 00:21:40.843 } 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:40.843 12:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:40.843 nvmf_trace.0 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 958124 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 958124 ']' 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 958124 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.843 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 958124 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 958124' 00:21:41.102 killing process with pid 958124 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 958124 00:21:41.102 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.102 00:21:41.102 Latency(us) 00:21:41.102 [2024-11-20T11:35:46.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.102 [2024-11-20T11:35:46.866Z] =================================================================================================================== 00:21:41.102 [2024-11-20T11:35:46.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 958124 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:41.102 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.103 rmmod nvme_tcp 00:21:41.103 rmmod nvme_fabrics 00:21:41.103 rmmod nvme_keyring 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 957847 ']' 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 957847 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 957847 ']' 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 957847 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.103 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957847 00:21:41.362 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.362 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.362 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957847' 00:21:41.362 killing process with pid 957847 00:21:41.362 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 957847 00:21:41.362 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 957847 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.362 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UrTPxHHyUE /tmp/tmp.cbPrW1Wn5b /tmp/tmp.l7YMJyAa8x 00:21:43.899 00:21:43.899 real 1m20.640s 00:21:43.899 user 2m4.747s 00:21:43.899 sys 0m26.534s 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.899 ************************************ 00:21:43.899 END TEST nvmf_tls 00:21:43.899 ************************************ 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.899 ************************************ 00:21:43.899 START TEST nvmf_fips 00:21:43.899 ************************************ 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:43.899 * Looking for test storage... 00:21:43.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.899 
12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:43.899 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:43.900 12:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.900 --rc genhtml_branch_coverage=1 00:21:43.900 --rc genhtml_function_coverage=1 00:21:43.900 --rc genhtml_legend=1 00:21:43.900 --rc geninfo_all_blocks=1 00:21:43.900 --rc geninfo_unexecuted_blocks=1 00:21:43.900 00:21:43.900 ' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.900 --rc genhtml_branch_coverage=1 00:21:43.900 --rc genhtml_function_coverage=1 00:21:43.900 --rc genhtml_legend=1 00:21:43.900 --rc geninfo_all_blocks=1 00:21:43.900 --rc geninfo_unexecuted_blocks=1 00:21:43.900 00:21:43.900 ' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.900 --rc genhtml_branch_coverage=1 00:21:43.900 --rc genhtml_function_coverage=1 00:21:43.900 --rc genhtml_legend=1 00:21:43.900 --rc geninfo_all_blocks=1 00:21:43.900 --rc geninfo_unexecuted_blocks=1 00:21:43.900 00:21:43.900 ' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.900 --rc genhtml_branch_coverage=1 00:21:43.900 --rc genhtml_function_coverage=1 00:21:43.900 --rc genhtml_legend=1 00:21:43.900 --rc geninfo_all_blocks=1 00:21:43.900 --rc geninfo_unexecuted_blocks=1 00:21:43.900 00:21:43.900 ' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.900 12:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.900 12:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:43.900 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:43.901 Error setting digest 00:21:43.901 40A23B850D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:43.901 40A23B850D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.901 12:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.901 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:21:50.472 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:21:50.472 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:21:50.472 Found net devices under 0000:1a:00.0: cvl_0_0 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:21:50.472 Found net devices under 0000:1a:00.1: cvl_0_1 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.472 12:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:21:50.472 00:21:50.472 --- 10.0.0.2 ping statistics --- 00:21:50.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.472 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:21:50.472 00:21:50.472 --- 10.0.0.1 ping statistics --- 00:21:50.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.472 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:50.472 12:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=962208 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 962208 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 962208 ']' 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.472 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.472 [2024-11-20 12:35:55.826029] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:21:50.472 [2024-11-20 12:35:55.826072] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.472 [2024-11-20 12:35:55.884247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.473 [2024-11-20 12:35:55.922999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.473 [2024-11-20 12:35:55.923032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.473 [2024-11-20 12:35:55.923038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.473 [2024-11-20 12:35:55.923044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.473 [2024-11-20 12:35:55.923049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:50.473 [2024-11-20 12:35:55.923633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.2Td 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.2Td 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.2Td 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.2Td 00:21:50.473 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:50.755 [2024-11-20 12:35:56.237854] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.755 [2024-11-20 12:35:56.253855] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.755 [2024-11-20 12:35:56.254076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.755 malloc0 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=962476 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 962476 /var/tmp/bdevperf.sock 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 962476 ']' 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 [2024-11-20 12:35:56.380025] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:21:50.755 [2024-11-20 12:35:56.380069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962476 ] 00:21:50.755 [2024-11-20 12:35:56.449511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.755 [2024-11-20 12:35:56.489066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.690 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.690 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:51.690 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2Td 00:21:51.690 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.948 [2024-11-20 12:35:57.516964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.948 TLSTESTn1 00:21:51.948 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.948 Running I/O for 10 seconds... 
00:21:54.263 6203.00 IOPS, 24.23 MiB/s [2024-11-20T11:36:00.964Z] 6227.00 IOPS, 24.32 MiB/s [2024-11-20T11:36:01.900Z] 6217.67 IOPS, 24.29 MiB/s [2024-11-20T11:36:02.837Z] 6238.75 IOPS, 24.37 MiB/s [2024-11-20T11:36:03.776Z] 6221.00 IOPS, 24.30 MiB/s [2024-11-20T11:36:05.154Z] 6234.50 IOPS, 24.35 MiB/s [2024-11-20T11:36:05.723Z] 6236.00 IOPS, 24.36 MiB/s [2024-11-20T11:36:07.101Z] 6250.12 IOPS, 24.41 MiB/s [2024-11-20T11:36:08.038Z] 6257.78 IOPS, 24.44 MiB/s [2024-11-20T11:36:08.038Z] 6208.30 IOPS, 24.25 MiB/s 00:22:02.274 Latency(us) 00:22:02.274 [2024-11-20T11:36:08.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.274 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:02.274 Verification LBA range: start 0x0 length 0x2000 00:22:02.274 TLSTESTn1 : 10.02 6211.39 24.26 0.00 0.00 20576.82 5719.51 26571.87 00:22:02.274 [2024-11-20T11:36:08.038Z] =================================================================================================================== 00:22:02.275 [2024-11-20T11:36:08.039Z] Total : 6211.39 24.26 0.00 0.00 20576.82 5719.51 26571.87 00:22:02.275 { 00:22:02.275 "results": [ 00:22:02.275 { 00:22:02.275 "job": "TLSTESTn1", 00:22:02.275 "core_mask": "0x4", 00:22:02.275 "workload": "verify", 00:22:02.275 "status": "finished", 00:22:02.275 "verify_range": { 00:22:02.275 "start": 0, 00:22:02.275 "length": 8192 00:22:02.275 }, 00:22:02.275 "queue_depth": 128, 00:22:02.275 "io_size": 4096, 00:22:02.275 "runtime": 10.015477, 00:22:02.275 "iops": 6211.386636902067, 00:22:02.275 "mibps": 24.2632290503987, 00:22:02.275 "io_failed": 0, 00:22:02.275 "io_timeout": 0, 00:22:02.275 "avg_latency_us": 20576.819812774913, 00:22:02.275 "min_latency_us": 5719.505454545455, 00:22:02.275 "max_latency_us": 26571.86909090909 00:22:02.275 } 00:22:02.275 ], 00:22:02.275 "core_count": 1 00:22:02.275 } 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:02.275 
12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:02.275 nvmf_trace.0 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 962476 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 962476 ']' 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 962476 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962476 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962476' 00:22:02.275 killing process with pid 962476 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 962476 00:22:02.275 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.275 00:22:02.275 Latency(us) 00:22:02.275 [2024-11-20T11:36:08.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.275 [2024-11-20T11:36:08.039Z] =================================================================================================================== 00:22:02.275 [2024-11-20T11:36:08.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.275 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 962476 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.534 rmmod nvme_tcp 00:22:02.534 rmmod nvme_fabrics 00:22:02.534 rmmod nvme_keyring 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:02.534 12:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 962208 ']' 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 962208 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 962208 ']' 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 962208 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962208 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962208' 00:22:02.534 killing process with pid 962208 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 962208 00:22:02.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 962208 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.697 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.697 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.2Td 00:22:04.697 00:22:04.697 real 0m21.226s 00:22:04.697 user 0m23.817s 00:22:04.697 sys 0m8.527s 00:22:04.697 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.697 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:04.697 ************************************ 00:22:04.697 END TEST nvmf_fips 00:22:04.697 ************************************ 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:04.957 ************************************ 00:22:04.957 START TEST nvmf_control_msg_list 00:22:04.957 ************************************ 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:04.957 * Looking for test storage... 00:22:04.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:04.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.957 --rc genhtml_branch_coverage=1 00:22:04.957 --rc genhtml_function_coverage=1 00:22:04.957 --rc genhtml_legend=1 00:22:04.957 --rc geninfo_all_blocks=1 00:22:04.957 --rc geninfo_unexecuted_blocks=1 00:22:04.957 00:22:04.957 ' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:04.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.957 --rc genhtml_branch_coverage=1 00:22:04.957 --rc genhtml_function_coverage=1 00:22:04.957 --rc genhtml_legend=1 00:22:04.957 --rc geninfo_all_blocks=1 00:22:04.957 --rc geninfo_unexecuted_blocks=1 00:22:04.957 00:22:04.957 ' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:04.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.957 --rc genhtml_branch_coverage=1 00:22:04.957 --rc genhtml_function_coverage=1 00:22:04.957 --rc genhtml_legend=1 00:22:04.957 --rc geninfo_all_blocks=1 00:22:04.957 --rc geninfo_unexecuted_blocks=1 00:22:04.957 00:22:04.957 ' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:04.957 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.957 --rc genhtml_branch_coverage=1 00:22:04.957 --rc genhtml_function_coverage=1 00:22:04.957 --rc genhtml_legend=1 00:22:04.957 --rc geninfo_all_blocks=1 00:22:04.957 --rc geninfo_unexecuted_blocks=1 00:22:04.957 00:22:04.957 ' 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.957 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:22:04.958 12:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.958 12:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.958 12:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.958 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.216 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.217 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.217 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.217 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.889 12:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:22:11.889 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:22:11.889 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.889 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.889 12:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:22:11.890 Found net devices under 0000:1a:00.0: cvl_0_0 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.890 12:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:22:11.890 Found net devices under 0000:1a:00.1: cvl_0_1 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.890 12:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:22:11.890 00:22:11.890 --- 10.0.0.2 ping statistics --- 00:22:11.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.890 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:22:11.890 00:22:11.890 --- 10.0.0.1 ping statistics --- 00:22:11.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.890 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=968633 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 968633 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 968633 ']' 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.890 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.890 [2024-11-20 12:36:16.906778] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:22:11.890 [2024-11-20 12:36:16.906816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.890 [2024-11-20 12:36:16.965578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.890 [2024-11-20 12:36:17.001671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.890 [2024-11-20 12:36:17.001706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.890 [2024-11-20 12:36:17.001713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.890 [2024-11-20 12:36:17.001718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.890 [2024-11-20 12:36:17.001723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:11.890 [2024-11-20 12:36:17.002289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.890 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.890 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:11.890 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.890 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.890 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.891 [2024-11-20 12:36:17.147697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.891 Malloc0 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.891 [2024-11-20 12:36:17.187890] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=968840 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=968841 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=968843 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 968840 00:22:11.891 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:11.891 [2024-11-20 12:36:17.256291] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:11.891 [2024-11-20 12:36:17.266199] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:11.891 [2024-11-20 12:36:17.276185] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:12.828 Initializing NVMe Controllers 00:22:12.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:12.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:12.828 Initialization complete. Launching workers. 00:22:12.828 ======================================================== 00:22:12.828 Latency(us) 00:22:12.828 Device Information : IOPS MiB/s Average min max 00:22:12.828 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 7438.00 29.05 134.23 107.93 383.45 00:22:12.828 ======================================================== 00:22:12.828 Total : 7438.00 29.05 134.23 107.93 383.45 00:22:12.828 00:22:12.828 Initializing NVMe Controllers 00:22:12.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:12.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:12.828 Initialization complete. Launching workers. 
00:22:12.828 ======================================================== 00:22:12.828 Latency(us) 00:22:12.828 Device Information : IOPS MiB/s Average min max 00:22:12.828 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 7386.00 28.85 135.18 123.63 391.02 00:22:12.828 ======================================================== 00:22:12.828 Total : 7386.00 28.85 135.18 123.63 391.02 00:22:12.828 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 968841 00:22:12.828 Initializing NVMe Controllers 00:22:12.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:12.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:12.828 Initialization complete. Launching workers. 00:22:12.828 ======================================================== 00:22:12.828 Latency(us) 00:22:12.828 Device Information : IOPS MiB/s Average min max 00:22:12.828 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40889.75 40604.98 40995.99 00:22:12.828 ======================================================== 00:22:12.828 Total : 25.00 0.10 40889.75 40604.98 40995.99 00:22:12.828 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 968843 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.828 12:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.828 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.828 rmmod nvme_tcp 00:22:12.828 rmmod nvme_fabrics 00:22:12.828 rmmod nvme_keyring 00:22:13.087 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.087 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:13.087 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:13.087 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 968633 ']' 00:22:13.087 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 968633 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 968633 ']' 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 968633 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 968633 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 968633' 00:22:13.088 killing process with pid 968633 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 968633 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 968633 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.088 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.623 00:22:15.623 real 0m10.394s 00:22:15.623 user 0m6.941s 00:22:15.623 
sys 0m5.667s 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.623 ************************************ 00:22:15.623 END TEST nvmf_control_msg_list 00:22:15.623 ************************************ 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:15.623 ************************************ 00:22:15.623 START TEST nvmf_wait_for_buf 00:22:15.623 ************************************ 00:22:15.623 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:15.623 * Looking for test storage... 
00:22:15.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:22:15.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.623 --rc genhtml_branch_coverage=1 00:22:15.623 --rc genhtml_function_coverage=1 00:22:15.623 --rc genhtml_legend=1 00:22:15.623 --rc geninfo_all_blocks=1 00:22:15.623 --rc geninfo_unexecuted_blocks=1 00:22:15.623 00:22:15.623 ' 00:22:15.623 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:15.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.623 --rc genhtml_branch_coverage=1 00:22:15.623 --rc genhtml_function_coverage=1 00:22:15.623 --rc genhtml_legend=1 00:22:15.624 --rc geninfo_all_blocks=1 00:22:15.624 --rc geninfo_unexecuted_blocks=1 00:22:15.624 00:22:15.624 ' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.624 --rc genhtml_branch_coverage=1 00:22:15.624 --rc genhtml_function_coverage=1 00:22:15.624 --rc genhtml_legend=1 00:22:15.624 --rc geninfo_all_blocks=1 00:22:15.624 --rc geninfo_unexecuted_blocks=1 00:22:15.624 00:22:15.624 ' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.624 --rc genhtml_branch_coverage=1 00:22:15.624 --rc genhtml_function_coverage=1 00:22:15.624 --rc genhtml_legend=1 00:22:15.624 --rc geninfo_all_blocks=1 00:22:15.624 --rc geninfo_unexecuted_blocks=1 00:22:15.624 00:22:15.624 ' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.624 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.195 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:22:22.195 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:22:22.195 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:22:22.195 Found net devices under 0000:1a:00.0: cvl_0_0 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.195 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.196 12:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:22:22.196 Found net devices under 0000:1a:00.1: cvl_0_1 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.196 12:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.196 12:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:22:22.196 00:22:22.196 --- 10.0.0.2 ping statistics --- 00:22:22.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.196 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:22:22.196 00:22:22.196 --- 10.0.0.1 ping statistics --- 00:22:22.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.196 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=972673 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 972673 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 972673 ']' 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.196 [2024-11-20 12:36:27.378805] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:22:22.196 [2024-11-20 12:36:27.378845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.196 [2024-11-20 12:36:27.457968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.196 [2024-11-20 12:36:27.495653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.196 [2024-11-20 12:36:27.495686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:22.196 [2024-11-20 12:36:27.495693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.196 [2024-11-20 12:36:27.495698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.196 [2024-11-20 12:36:27.495704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.196 [2024-11-20 12:36:27.496259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.196 
12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.196 Malloc0 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:22.196 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:22.197 [2024-11-20 12:36:27.655085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:22.197 [2024-11-20 12:36:27.683289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:22.197 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:22.197 [2024-11-20 12:36:27.763484] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:23.572 Initializing NVMe Controllers 00:22:23.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:23.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:23.572 Initialization complete. Launching workers. 00:22:23.572 ======================================================== 00:22:23.572 Latency(us) 00:22:23.572 Device Information : IOPS MiB/s Average min max 00:22:23.573 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 76.00 9.50 54654.88 8004.41 191533.05 00:22:23.573 ======================================================== 00:22:23.573 Total : 76.00 9.50 54654.88 8004.41 191533.05 00:22:23.573 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.573 12:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1190 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1190 -eq 0 ]] 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.573 rmmod nvme_tcp 00:22:23.573 rmmod nvme_fabrics 00:22:23.573 rmmod nvme_keyring 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 972673 ']' 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 972673 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 972673 ']' 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 972673 
00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972673 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972673' 00:22:23.573 killing process with pid 972673 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 972673 00:22:23.573 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 972673 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.835 12:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.835 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.740 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.740 00:22:25.740 real 0m10.531s 00:22:25.740 user 0m3.919s 00:22:25.740 sys 0m5.024s 00:22:25.740 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.740 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:25.740 ************************************ 00:22:25.740 END TEST nvmf_wait_for_buf 00:22:25.740 ************************************ 00:22:25.999 12:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:25.999 12:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:25.999 12:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:25.999 12:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:25.999 12:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.999 12:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.573 
12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.573 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:22:32.574 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.574 12:36:37 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:22:32.574 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:22:32.574 Found net devices under 0000:1a:00.0: cvl_0_0 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:22:32.574 Found net devices under 0000:1a:00.1: cvl_0_1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.574 ************************************ 00:22:32.574 START TEST nvmf_perf_adq 00:22:32.574 ************************************ 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:32.574 * Looking for test storage... 00:22:32.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.574 --rc genhtml_branch_coverage=1 00:22:32.574 --rc genhtml_function_coverage=1 00:22:32.574 --rc genhtml_legend=1 00:22:32.574 --rc geninfo_all_blocks=1 00:22:32.574 --rc geninfo_unexecuted_blocks=1 00:22:32.574 00:22:32.574 ' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.574 --rc genhtml_branch_coverage=1 00:22:32.574 --rc genhtml_function_coverage=1 00:22:32.574 --rc genhtml_legend=1 00:22:32.574 --rc geninfo_all_blocks=1 00:22:32.574 --rc geninfo_unexecuted_blocks=1 00:22:32.574 00:22:32.574 ' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.574 --rc genhtml_branch_coverage=1 00:22:32.574 --rc genhtml_function_coverage=1 00:22:32.574 --rc genhtml_legend=1 00:22:32.574 --rc geninfo_all_blocks=1 00:22:32.574 --rc geninfo_unexecuted_blocks=1 00:22:32.574 00:22:32.574 ' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.574 --rc genhtml_branch_coverage=1 00:22:32.574 --rc genhtml_function_coverage=1 00:22:32.574 --rc genhtml_legend=1 00:22:32.574 --rc geninfo_all_blocks=1 00:22:32.574 --rc geninfo_unexecuted_blocks=1 00:22:32.574 00:22:32.574 ' 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.574 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.575 12:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.575 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:37.844 12:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:22:37.844 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:22:37.844 
Found 0000:1a:00.1 (0x8086 - 0x159b) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:22:37.844 Found net devices under 0000:1a:00.0: cvl_0_0 00:22:37.844 12:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:22:37.844 Found net devices under 0000:1a:00.1: cvl_0_1 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:37.844 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.845 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:37.845 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:37.845 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:37.845 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:22:37.845 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:39.221 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:41.124 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:22:46.395 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:22:46.395 12:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:22:46.395 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.395 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:22:46.396 Found net devices under 0000:1a:00.0: cvl_0_0 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:22:46.396 Found net devices under 0000:1a:00.1: cvl_0_1 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.396 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:46.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:22:46.396 00:22:46.396 --- 10.0.0.2 ping statistics --- 00:22:46.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.396 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:22:46.396 00:22:46.396 --- 10.0.0.1 ping statistics --- 00:22:46.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.396 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=981670 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 981670 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 981670 ']' 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.396 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.655 [2024-11-20 12:36:52.194698] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:22:46.655 [2024-11-20 12:36:52.194748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.655 [2024-11-20 12:36:52.271989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.655 [2024-11-20 12:36:52.313487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.655 [2024-11-20 12:36:52.313521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.655 [2024-11-20 12:36:52.313528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.655 [2024-11-20 12:36:52.313534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.655 [2024-11-20 12:36:52.313539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
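The `nvmf_tgt` launch traced above passes `-m 0xF`, and the reactor notices that follow show one reactor per set bit in that core mask. A minimal sketch (plain POSIX shell, not part of the test itself) of how the mask maps to a reactor count:

```shell
# Count the set bits in the SPDK core mask 0xF (cores 0-3).
# Each set bit corresponds to one "Reactor started on core N" notice.
mask=$((0xF))
cores=0
while [ "$mask" -ne 0 ]; do
  cores=$((cores + (mask & 1)))   # add the lowest bit
  mask=$((mask >> 1))             # shift to the next core
done
echo "$cores"   # -> 4
```

With `-m 0xF` this yields 4, matching the four reactor start-up notices and the four `nvmf_tgt_poll_group_00N` entries reported later by `nvmf_get_stats`.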
00:22:46.655 [2024-11-20 12:36:52.315003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.655 [2024-11-20 12:36:52.315115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.655 [2024-11-20 12:36:52.315204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.655 [2024-11-20 12:36:52.315206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.655 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:46.914 12:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.914 [2024-11-20 12:36:52.514475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.914 Malloc1 00:22:46.914 12:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.914 [2024-11-20 12:36:52.574548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=981725 00:22:46.914 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:46.914 12:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:49.444 "tick_rate": 2200000000, 00:22:49.444 "poll_groups": [ 00:22:49.444 { 00:22:49.444 "name": "nvmf_tgt_poll_group_000", 00:22:49.444 "admin_qpairs": 1, 00:22:49.444 "io_qpairs": 1, 00:22:49.444 "current_admin_qpairs": 1, 00:22:49.444 "current_io_qpairs": 1, 00:22:49.444 "pending_bdev_io": 0, 00:22:49.444 "completed_nvme_io": 20628, 00:22:49.444 "transports": [ 00:22:49.444 { 00:22:49.444 "trtype": "TCP" 00:22:49.444 } 00:22:49.444 ] 00:22:49.444 }, 00:22:49.444 { 00:22:49.444 "name": "nvmf_tgt_poll_group_001", 00:22:49.444 "admin_qpairs": 0, 00:22:49.444 "io_qpairs": 1, 00:22:49.444 "current_admin_qpairs": 0, 00:22:49.444 "current_io_qpairs": 1, 00:22:49.444 "pending_bdev_io": 0, 00:22:49.444 "completed_nvme_io": 20932, 00:22:49.444 "transports": [ 00:22:49.444 { 00:22:49.444 "trtype": "TCP" 00:22:49.444 } 00:22:49.444 ] 00:22:49.444 }, 00:22:49.444 { 00:22:49.444 "name": "nvmf_tgt_poll_group_002", 00:22:49.444 "admin_qpairs": 0, 00:22:49.444 "io_qpairs": 1, 00:22:49.444 "current_admin_qpairs": 0, 00:22:49.444 "current_io_qpairs": 1, 00:22:49.444 "pending_bdev_io": 0, 00:22:49.444 "completed_nvme_io": 21422, 00:22:49.444 
"transports": [ 00:22:49.444 { 00:22:49.444 "trtype": "TCP" 00:22:49.444 } 00:22:49.444 ] 00:22:49.444 }, 00:22:49.444 { 00:22:49.444 "name": "nvmf_tgt_poll_group_003", 00:22:49.444 "admin_qpairs": 0, 00:22:49.444 "io_qpairs": 1, 00:22:49.444 "current_admin_qpairs": 0, 00:22:49.444 "current_io_qpairs": 1, 00:22:49.444 "pending_bdev_io": 0, 00:22:49.444 "completed_nvme_io": 21428, 00:22:49.444 "transports": [ 00:22:49.444 { 00:22:49.444 "trtype": "TCP" 00:22:49.444 } 00:22:49.444 ] 00:22:49.444 } 00:22:49.444 ] 00:22:49.444 }' 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:49.444 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 981725 00:22:57.564 Initializing NVMe Controllers 00:22:57.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:57.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:57.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:57.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:57.564 Initialization complete. Launching workers. 
00:22:57.564 ======================================================== 00:22:57.564 Latency(us) 00:22:57.564 Device Information : IOPS MiB/s Average min max 00:22:57.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11298.30 44.13 5664.53 1760.76 9030.69 00:22:57.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11100.20 43.36 5765.55 2072.31 10529.55 00:22:57.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11334.20 44.27 5647.83 1917.44 10257.92 00:22:57.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11029.20 43.08 5804.44 2178.65 13002.93 00:22:57.564 ======================================================== 00:22:57.564 Total : 44761.90 174.85 5719.83 1760.76 13002.93 00:22:57.564 00:22:57.564 [2024-11-20 12:37:02.731510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8960 is same with the state(6) to be set 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.564 rmmod nvme_tcp 00:22:57.564 rmmod nvme_fabrics 00:22:57.564 rmmod nvme_keyring 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- 
# set -e 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 981670 ']' 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 981670 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 981670 ']' 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 981670 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981670 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981670' 00:22:57.564 killing process with pid 981670 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 981670 00:22:57.564 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 981670 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # 
iptr 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.564 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.469 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.469 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:59.469 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:59.469 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:00.847 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:02.751 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.024 12:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:08.024 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:23:08.025 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.025 
12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:23:08.025 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:1a:00.0: cvl_0_0' 00:23:08.025 Found net devices under 0000:1a:00.0: cvl_0_0 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:23:08.025 Found net devices under 0000:1a:00.1: cvl_0_1 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:08.025 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:23:08.026 00:23:08.026 --- 10.0.0.2 ping statistics --- 00:23:08.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.026 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:23:08.026 00:23:08.026 --- 10.0.0.1 ping statistics --- 00:23:08.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.026 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:08.026 net.core.busy_poll = 1 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:08.026 net.core.busy_read = 1 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:08.026 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=985770 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 985770 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 985770 ']' 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.285 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:08.285 [2024-11-20 12:37:14.045236] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:23:08.285 [2024-11-20 12:37:14.045284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.543 [2024-11-20 12:37:14.122350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.543 [2024-11-20 12:37:14.162465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.543 [2024-11-20 12:37:14.162500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.543 [2024-11-20 12:37:14.162506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.543 [2024-11-20 12:37:14.162511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:08.543 [2024-11-20 12:37:14.162516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.544 [2024-11-20 12:37:14.164163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.544 [2024-11-20 12:37:14.164281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.544 [2024-11-20 12:37:14.164389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.544 [2024-11-20 12:37:14.164391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.110 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.110 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:09.110 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.110 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.110 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 [2024-11-20 12:37:15.024462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 Malloc1 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.370 [2024-11-20 12:37:15.088192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=986056 
00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:09.370 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:11.905 "tick_rate": 2200000000, 00:23:11.905 "poll_groups": [ 00:23:11.905 { 00:23:11.905 "name": "nvmf_tgt_poll_group_000", 00:23:11.905 "admin_qpairs": 1, 00:23:11.905 "io_qpairs": 2, 00:23:11.905 "current_admin_qpairs": 1, 00:23:11.905 "current_io_qpairs": 2, 00:23:11.905 "pending_bdev_io": 0, 00:23:11.905 "completed_nvme_io": 28686, 00:23:11.905 "transports": [ 00:23:11.905 { 00:23:11.905 "trtype": "TCP" 00:23:11.905 } 00:23:11.905 ] 00:23:11.905 }, 00:23:11.905 { 00:23:11.905 "name": "nvmf_tgt_poll_group_001", 00:23:11.905 "admin_qpairs": 0, 00:23:11.905 "io_qpairs": 2, 00:23:11.905 "current_admin_qpairs": 0, 00:23:11.905 "current_io_qpairs": 2, 00:23:11.905 "pending_bdev_io": 0, 00:23:11.905 "completed_nvme_io": 29679, 00:23:11.905 "transports": [ 00:23:11.905 { 00:23:11.905 "trtype": "TCP" 00:23:11.905 } 00:23:11.905 ] 00:23:11.905 }, 00:23:11.905 { 00:23:11.905 "name": "nvmf_tgt_poll_group_002", 00:23:11.905 "admin_qpairs": 0, 00:23:11.905 "io_qpairs": 0, 00:23:11.905 "current_admin_qpairs": 0, 
00:23:11.905 "current_io_qpairs": 0, 00:23:11.905 "pending_bdev_io": 0, 00:23:11.905 "completed_nvme_io": 0, 00:23:11.905 "transports": [ 00:23:11.905 { 00:23:11.905 "trtype": "TCP" 00:23:11.905 } 00:23:11.905 ] 00:23:11.905 }, 00:23:11.905 { 00:23:11.905 "name": "nvmf_tgt_poll_group_003", 00:23:11.905 "admin_qpairs": 0, 00:23:11.905 "io_qpairs": 0, 00:23:11.905 "current_admin_qpairs": 0, 00:23:11.905 "current_io_qpairs": 0, 00:23:11.905 "pending_bdev_io": 0, 00:23:11.905 "completed_nvme_io": 0, 00:23:11.905 "transports": [ 00:23:11.905 { 00:23:11.905 "trtype": "TCP" 00:23:11.905 } 00:23:11.905 ] 00:23:11.905 } 00:23:11.905 ] 00:23:11.905 }' 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:11.905 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 986056 00:23:20.063 Initializing NVMe Controllers 00:23:20.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:20.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:20.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:20.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:20.063 Initialization complete. Launching workers. 
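The idle-poll-group check at perf_adq.sh@108-109 above can be reproduced standalone: `select()` emits one object per poll group with no active I/O qpairs, and `wc -l` counts those objects. A minimal sketch, with the JSON trimmed down as a stand-in for the full `nvmf_get_stats` payload logged above (the group names and qpair counts are taken from the log; the failure message is hypothetical):

```shell
# Count poll groups with no active I/O qpairs, as perf_adq.sh@108 does.
# The JSON here is a trimmed stand-in for the nvmf_get_stats output above.
nvmf_stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'
count=$(echo "$nvmf_stats" \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
  | wc -l)
echo "$count"   # 2 idle groups, matching "count=2" in the log
if [[ $count -lt 2 ]]; then
  echo "I/O landed on more poll groups than ADQ should allow"
fi
```

With the stats above the check `[[ 2 -lt 2 ]]` is false, so the run proceeds — consistent with the log, where I/O is confined to two of the four poll groups.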
00:23:20.063 ======================================================== 00:23:20.063 Latency(us) 00:23:20.063 Device Information : IOPS MiB/s Average min max 00:23:20.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7196.30 28.11 8895.68 1254.72 53346.25 00:23:20.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8256.90 32.25 7784.15 1190.95 54945.99 00:23:20.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7871.00 30.75 8131.22 1176.77 52144.40 00:23:20.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7930.80 30.98 8077.60 1114.34 52184.86 00:23:20.063 ======================================================== 00:23:20.063 Total : 31254.99 122.09 8201.94 1114.34 54945.99 00:23:20.063 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.063 rmmod nvme_tcp 00:23:20.063 rmmod nvme_fabrics 00:23:20.063 rmmod nvme_keyring 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:20.063 12:37:25 
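As a quick sanity check on the latency table above, the Total IOPS row is the sum of the four per-core rows up to display rounding: summing the rounded per-core figures gives 31255.00 versus the logged total of 31254.99, a 0.01 artifact of the perf tool rounding each row before printing. A sketch of the recomputation:

```shell
# Recompute the Total IOPS from the four per-core IOPS printed above.
# Inputs are the already-rounded per-core values, so the sum differs from
# the logged total (31254.99) only by display rounding.
total=$(awk 'BEGIN { printf "%.2f", 7196.30 + 8256.90 + 7871.00 + 7930.80 }')
echo "$total"
```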
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 985770 ']' 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 985770 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 985770 ']' 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 985770 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 985770 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 985770' 00:23:20.063 killing process with pid 985770 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 985770 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 985770 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:20.063 12:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.063 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:23.379 00:23:23.379 real 0m51.241s 00:23:23.379 user 2m46.547s 00:23:23.379 sys 0m10.106s 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 ************************************ 00:23:23.379 END TEST nvmf_perf_adq 00:23:23.379 ************************************ 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.379 ************************************ 00:23:23.379 START TEST nvmf_shutdown 00:23:23.379 ************************************ 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:23.379 * Looking for test storage... 00:23:23.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.379 12:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.379 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.380 --rc genhtml_branch_coverage=1 00:23:23.380 --rc genhtml_function_coverage=1 00:23:23.380 --rc genhtml_legend=1 00:23:23.380 --rc geninfo_all_blocks=1 00:23:23.380 --rc geninfo_unexecuted_blocks=1 00:23:23.380 00:23:23.380 ' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.380 --rc genhtml_branch_coverage=1 00:23:23.380 --rc genhtml_function_coverage=1 00:23:23.380 --rc genhtml_legend=1 00:23:23.380 --rc geninfo_all_blocks=1 00:23:23.380 --rc geninfo_unexecuted_blocks=1 00:23:23.380 00:23:23.380 ' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.380 --rc genhtml_branch_coverage=1 00:23:23.380 --rc genhtml_function_coverage=1 00:23:23.380 --rc genhtml_legend=1 00:23:23.380 --rc geninfo_all_blocks=1 00:23:23.380 --rc geninfo_unexecuted_blocks=1 00:23:23.380 00:23:23.380 ' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.380 --rc genhtml_branch_coverage=1 00:23:23.380 --rc genhtml_function_coverage=1 00:23:23.380 --rc genhtml_legend=1 00:23:23.380 --rc geninfo_all_blocks=1 00:23:23.380 --rc geninfo_unexecuted_blocks=1 00:23:23.380 00:23:23.380 ' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.380 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.381 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.381 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:23.381 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:23.381 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:23.381 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:23.381 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.381 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:23.381 ************************************ 00:23:23.381 START TEST nvmf_shutdown_tc1 00:23:23.381 ************************************ 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.381 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:29.955 12:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.955 12:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:23:29.955 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.955 12:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:23:29.955 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:23:29.955 Found net devices under 0000:1a:00.0: cvl_0_0 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:23:29.955 Found net devices under 0000:1a:00.1: cvl_0_1 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.955 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:29.956 12:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.956 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:29.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:23:29.956 00:23:29.956 --- 10.0.0.2 ping statistics --- 00:23:29.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.956 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:23:29.956 00:23:29.956 --- 10.0.0.1 ping statistics --- 00:23:29.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.956 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=991783 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 991783 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 991783 ']' 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:29.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.956 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.956 [2024-11-20 12:37:35.282752] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:23:29.956 [2024-11-20 12:37:35.282793] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.956 [2024-11-20 12:37:35.362601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.956 [2024-11-20 12:37:35.402540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.956 [2024-11-20 12:37:35.402577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.956 [2024-11-20 12:37:35.402584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.956 [2024-11-20 12:37:35.402589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.956 [2024-11-20 12:37:35.402594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:29.956 [2024-11-20 12:37:35.404186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.956 [2024-11-20 12:37:35.404298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.956 [2024-11-20 12:37:35.404410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.956 [2024-11-20 12:37:35.404435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.524 [2024-11-20 12:37:36.145363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.524 12:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.524 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.524 Malloc1 00:23:30.524 [2024-11-20 12:37:36.260719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.524 Malloc2 00:23:30.782 Malloc3 00:23:30.782 Malloc4 00:23:30.782 Malloc5 00:23:30.782 Malloc6 00:23:30.782 Malloc7 00:23:31.043 Malloc8 00:23:31.043 Malloc9 
00:23:31.043 Malloc10 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=992103 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 992103 /var/tmp/bdevperf.sock 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 992103 ']' 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.043 { 00:23:31.043 "params": { 00:23:31.043 "name": "Nvme$subsystem", 00:23:31.043 "trtype": "$TEST_TRANSPORT", 00:23:31.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.043 "adrfam": "ipv4", 00:23:31.043 "trsvcid": "$NVMF_PORT", 00:23:31.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.043 "hdgst": ${hdgst:-false}, 00:23:31.043 "ddgst": ${ddgst:-false} 00:23:31.043 }, 00:23:31.043 "method": "bdev_nvme_attach_controller" 00:23:31.043 } 00:23:31.043 EOF 00:23:31.043 )") 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.043 { 00:23:31.043 "params": { 00:23:31.043 "name": "Nvme$subsystem", 00:23:31.043 "trtype": "$TEST_TRANSPORT", 00:23:31.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.043 "adrfam": "ipv4", 00:23:31.043 "trsvcid": "$NVMF_PORT", 00:23:31.043 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.043 "hdgst": ${hdgst:-false}, 00:23:31.043 "ddgst": ${ddgst:-false} 00:23:31.043 }, 00:23:31.043 "method": "bdev_nvme_attach_controller" 00:23:31.043 } 00:23:31.043 EOF 00:23:31.043 )") 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.043 { 00:23:31.043 "params": { 00:23:31.043 "name": "Nvme$subsystem", 00:23:31.043 "trtype": "$TEST_TRANSPORT", 00:23:31.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.043 "adrfam": "ipv4", 00:23:31.043 "trsvcid": "$NVMF_PORT", 00:23:31.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.043 "hdgst": ${hdgst:-false}, 00:23:31.043 "ddgst": ${ddgst:-false} 00:23:31.043 }, 00:23:31.043 "method": "bdev_nvme_attach_controller" 00:23:31.043 } 00:23:31.043 EOF 00:23:31.043 )") 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.043 { 00:23:31.043 "params": { 00:23:31.043 "name": "Nvme$subsystem", 00:23:31.043 "trtype": "$TEST_TRANSPORT", 00:23:31.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.043 "adrfam": "ipv4", 00:23:31.043 "trsvcid": "$NVMF_PORT", 00:23:31.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.043 "hdgst": 
${hdgst:-false}, 00:23:31.043 "ddgst": ${ddgst:-false} 00:23:31.043 }, 00:23:31.043 "method": "bdev_nvme_attach_controller" 00:23:31.043 } 00:23:31.043 EOF 00:23:31.043 )") 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.043 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.044 { 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme$subsystem", 00:23:31.044 "trtype": "$TEST_TRANSPORT", 00:23:31.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "$NVMF_PORT", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.044 "hdgst": ${hdgst:-false}, 00:23:31.044 "ddgst": ${ddgst:-false} 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 } 00:23:31.044 EOF 00:23:31.044 )") 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.044 { 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme$subsystem", 00:23:31.044 "trtype": "$TEST_TRANSPORT", 00:23:31.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "$NVMF_PORT", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.044 "hdgst": ${hdgst:-false}, 00:23:31.044 "ddgst": ${ddgst:-false} 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 
00:23:31.044 } 00:23:31.044 EOF 00:23:31.044 )") 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.044 [2024-11-20 12:37:36.736886] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:23:31.044 [2024-11-20 12:37:36.736931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.044 { 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme$subsystem", 00:23:31.044 "trtype": "$TEST_TRANSPORT", 00:23:31.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "$NVMF_PORT", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.044 "hdgst": ${hdgst:-false}, 00:23:31.044 "ddgst": ${ddgst:-false} 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 } 00:23:31.044 EOF 00:23:31.044 )") 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.044 { 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme$subsystem", 00:23:31.044 "trtype": "$TEST_TRANSPORT", 00:23:31.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "$NVMF_PORT", 
00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.044 "hdgst": ${hdgst:-false}, 00:23:31.044 "ddgst": ${ddgst:-false} 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 } 00:23:31.044 EOF 00:23:31.044 )") 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.044 { 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme$subsystem", 00:23:31.044 "trtype": "$TEST_TRANSPORT", 00:23:31.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "$NVMF_PORT", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.044 "hdgst": ${hdgst:-false}, 00:23:31.044 "ddgst": ${ddgst:-false} 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 } 00:23:31.044 EOF 00:23:31.044 )") 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.044 { 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme$subsystem", 00:23:31.044 "trtype": "$TEST_TRANSPORT", 00:23:31.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "$NVMF_PORT", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:23:31.044 "hdgst": ${hdgst:-false}, 00:23:31.044 "ddgst": ${ddgst:-false} 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 } 00:23:31.044 EOF 00:23:31.044 )") 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:31.044 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme1", 00:23:31.044 "trtype": "tcp", 00:23:31.044 "traddr": "10.0.0.2", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "4420", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.044 "hdgst": false, 00:23:31.044 "ddgst": false 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 },{ 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme2", 00:23:31.044 "trtype": "tcp", 00:23:31.044 "traddr": "10.0.0.2", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "4420", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:31.044 "hdgst": false, 00:23:31.044 "ddgst": false 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 },{ 00:23:31.044 "params": { 00:23:31.044 "name": "Nvme3", 00:23:31.044 "trtype": "tcp", 00:23:31.044 "traddr": "10.0.0.2", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "4420", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:31.044 "hdgst": false, 00:23:31.044 "ddgst": false 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.044 },{ 00:23:31.044 "params": { 00:23:31.044 
"name": "Nvme4", 00:23:31.044 "trtype": "tcp", 00:23:31.044 "traddr": "10.0.0.2", 00:23:31.044 "adrfam": "ipv4", 00:23:31.044 "trsvcid": "4420", 00:23:31.044 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:31.044 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:31.044 "hdgst": false, 00:23:31.044 "ddgst": false 00:23:31.044 }, 00:23:31.044 "method": "bdev_nvme_attach_controller" 00:23:31.045 },{ 00:23:31.045 "params": { 00:23:31.045 "name": "Nvme5", 00:23:31.045 "trtype": "tcp", 00:23:31.045 "traddr": "10.0.0.2", 00:23:31.045 "adrfam": "ipv4", 00:23:31.045 "trsvcid": "4420", 00:23:31.045 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:31.045 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:31.045 "hdgst": false, 00:23:31.045 "ddgst": false 00:23:31.045 }, 00:23:31.045 "method": "bdev_nvme_attach_controller" 00:23:31.045 },{ 00:23:31.045 "params": { 00:23:31.045 "name": "Nvme6", 00:23:31.045 "trtype": "tcp", 00:23:31.045 "traddr": "10.0.0.2", 00:23:31.045 "adrfam": "ipv4", 00:23:31.045 "trsvcid": "4420", 00:23:31.045 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:31.045 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:31.045 "hdgst": false, 00:23:31.045 "ddgst": false 00:23:31.045 }, 00:23:31.045 "method": "bdev_nvme_attach_controller" 00:23:31.045 },{ 00:23:31.045 "params": { 00:23:31.045 "name": "Nvme7", 00:23:31.045 "trtype": "tcp", 00:23:31.045 "traddr": "10.0.0.2", 00:23:31.045 "adrfam": "ipv4", 00:23:31.045 "trsvcid": "4420", 00:23:31.045 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:31.045 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:31.045 "hdgst": false, 00:23:31.045 "ddgst": false 00:23:31.045 }, 00:23:31.045 "method": "bdev_nvme_attach_controller" 00:23:31.045 },{ 00:23:31.045 "params": { 00:23:31.045 "name": "Nvme8", 00:23:31.045 "trtype": "tcp", 00:23:31.045 "traddr": "10.0.0.2", 00:23:31.045 "adrfam": "ipv4", 00:23:31.045 "trsvcid": "4420", 00:23:31.045 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:31.045 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:31.045 
"hdgst": false, 00:23:31.045 "ddgst": false 00:23:31.045 }, 00:23:31.045 "method": "bdev_nvme_attach_controller" 00:23:31.045 },{ 00:23:31.045 "params": { 00:23:31.045 "name": "Nvme9", 00:23:31.045 "trtype": "tcp", 00:23:31.045 "traddr": "10.0.0.2", 00:23:31.045 "adrfam": "ipv4", 00:23:31.045 "trsvcid": "4420", 00:23:31.045 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:31.045 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:31.045 "hdgst": false, 00:23:31.045 "ddgst": false 00:23:31.045 }, 00:23:31.045 "method": "bdev_nvme_attach_controller" 00:23:31.045 },{ 00:23:31.045 "params": { 00:23:31.045 "name": "Nvme10", 00:23:31.045 "trtype": "tcp", 00:23:31.045 "traddr": "10.0.0.2", 00:23:31.045 "adrfam": "ipv4", 00:23:31.045 "trsvcid": "4420", 00:23:31.045 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:31.045 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:31.045 "hdgst": false, 00:23:31.045 "ddgst": false 00:23:31.045 }, 00:23:31.045 "method": "bdev_nvme_attach_controller" 00:23:31.045 }' 00:23:31.304 [2024-11-20 12:37:36.812750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.304 [2024-11-20 12:37:36.852042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 992103 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:32.682 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:33.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 992103 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 991783 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.625 { 00:23:33.625 "params": { 00:23:33.625 "name": "Nvme$subsystem", 00:23:33.625 "trtype": "$TEST_TRANSPORT", 00:23:33.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.625 "adrfam": "ipv4", 00:23:33.625 "trsvcid": "$NVMF_PORT", 00:23:33.625 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.625 "hdgst": ${hdgst:-false}, 00:23:33.625 "ddgst": ${ddgst:-false} 00:23:33.625 }, 00:23:33.625 "method": "bdev_nvme_attach_controller" 00:23:33.625 } 00:23:33.625 EOF 00:23:33.625 )") 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.625 { 00:23:33.625 "params": { 00:23:33.625 "name": "Nvme$subsystem", 00:23:33.625 "trtype": "$TEST_TRANSPORT", 00:23:33.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.625 "adrfam": "ipv4", 00:23:33.625 "trsvcid": "$NVMF_PORT", 00:23:33.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.625 "hdgst": ${hdgst:-false}, 00:23:33.625 "ddgst": ${ddgst:-false} 00:23:33.625 }, 00:23:33.625 "method": "bdev_nvme_attach_controller" 00:23:33.625 } 00:23:33.625 EOF 00:23:33.625 )") 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.625 { 00:23:33.625 "params": { 00:23:33.625 "name": "Nvme$subsystem", 00:23:33.625 "trtype": "$TEST_TRANSPORT", 00:23:33.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.625 "adrfam": "ipv4", 00:23:33.625 "trsvcid": "$NVMF_PORT", 00:23:33.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.625 "hdgst": 
${hdgst:-false}, 00:23:33.625 "ddgst": ${ddgst:-false} 00:23:33.625 }, 00:23:33.625 "method": "bdev_nvme_attach_controller" 00:23:33.625 } 00:23:33.625 EOF 00:23:33.625 )") 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.625 { 00:23:33.625 "params": { 00:23:33.625 "name": "Nvme$subsystem", 00:23:33.625 "trtype": "$TEST_TRANSPORT", 00:23:33.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.625 "adrfam": "ipv4", 00:23:33.625 "trsvcid": "$NVMF_PORT", 00:23:33.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.625 "hdgst": ${hdgst:-false}, 00:23:33.625 "ddgst": ${ddgst:-false} 00:23:33.625 }, 00:23:33.625 "method": "bdev_nvme_attach_controller" 00:23:33.625 } 00:23:33.625 EOF 00:23:33.625 )") 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.625 { 00:23:33.625 "params": { 00:23:33.625 "name": "Nvme$subsystem", 00:23:33.625 "trtype": "$TEST_TRANSPORT", 00:23:33.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.625 "adrfam": "ipv4", 00:23:33.625 "trsvcid": "$NVMF_PORT", 00:23:33.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.625 "hdgst": ${hdgst:-false}, 00:23:33.625 "ddgst": ${ddgst:-false} 00:23:33.625 }, 00:23:33.625 "method": "bdev_nvme_attach_controller" 
00:23:33.625 } 00:23:33.625 EOF 00:23:33.625 )") 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.625 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.625 { 00:23:33.625 "params": { 00:23:33.625 "name": "Nvme$subsystem", 00:23:33.625 "trtype": "$TEST_TRANSPORT", 00:23:33.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "$NVMF_PORT", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.626 "hdgst": ${hdgst:-false}, 00:23:33.626 "ddgst": ${ddgst:-false} 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 } 00:23:33.626 EOF 00:23:33.626 )") 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.626 { 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme$subsystem", 00:23:33.626 "trtype": "$TEST_TRANSPORT", 00:23:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "$NVMF_PORT", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.626 "hdgst": ${hdgst:-false}, 00:23:33.626 "ddgst": ${ddgst:-false} 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 } 00:23:33.626 EOF 00:23:33.626 )") 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:23:33.626 [2024-11-20 12:37:39.258083] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:23:33.626 [2024-11-20 12:37:39.258131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992644 ] 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.626 { 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme$subsystem", 00:23:33.626 "trtype": "$TEST_TRANSPORT", 00:23:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "$NVMF_PORT", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.626 "hdgst": ${hdgst:-false}, 00:23:33.626 "ddgst": ${ddgst:-false} 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 } 00:23:33.626 EOF 00:23:33.626 )") 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.626 { 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme$subsystem", 00:23:33.626 "trtype": "$TEST_TRANSPORT", 00:23:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "$NVMF_PORT", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.626 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:33.626 "hdgst": ${hdgst:-false}, 00:23:33.626 "ddgst": ${ddgst:-false} 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 } 00:23:33.626 EOF 00:23:33.626 )") 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.626 { 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme$subsystem", 00:23:33.626 "trtype": "$TEST_TRANSPORT", 00:23:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "$NVMF_PORT", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.626 "hdgst": ${hdgst:-false}, 00:23:33.626 "ddgst": ${ddgst:-false} 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 } 00:23:33.626 EOF 00:23:33.626 )") 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:33.626 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme1", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme2", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme3", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme4", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 
00:23:33.626 "name": "Nvme5", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme6", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme7", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme8", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme9", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 },{ 00:23:33.626 "params": { 00:23:33.626 "name": "Nvme10", 00:23:33.626 "trtype": "tcp", 00:23:33.626 "traddr": "10.0.0.2", 00:23:33.626 "adrfam": "ipv4", 00:23:33.626 "trsvcid": "4420", 00:23:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:33.626 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:33.626 "hdgst": false, 00:23:33.626 "ddgst": false 00:23:33.626 }, 00:23:33.626 "method": "bdev_nvme_attach_controller" 00:23:33.626 }' 00:23:33.626 [2024-11-20 12:37:39.334435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.626 [2024-11-20 12:37:39.372943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.006 Running I/O for 1 seconds... 00:23:36.386 2439.00 IOPS, 152.44 MiB/s 00:23:36.386 Latency(us) 00:23:36.386 [2024-11-20T11:37:42.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.386 Verification LBA range: start 0x0 length 0x400 00:23:36.386 Nvme1n1 : 1.07 297.74 18.61 0.00 0.00 212351.72 16443.58 206855.45 00:23:36.386 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme2n1 : 1.12 284.76 17.80 0.00 0.00 219086.66 15073.28 210668.45 00:23:36.387 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme3n1 : 1.14 335.68 20.98 0.00 0.00 183554.17 12511.42 199229.44 00:23:36.387 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme4n1 : 1.14 281.46 17.59 0.00 0.00 216060.09 16205.27 217341.21 00:23:36.387 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme5n1 : 1.13 283.08 17.69 0.00 0.00 211929.46 17396.83 184930.68 00:23:36.387 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme6n1 : 1.13 282.09 17.63 0.00 0.00 209913.67 16920.20 201135.94 00:23:36.387 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme7n1 : 1.15 334.55 20.91 0.00 0.00 174759.25 15013.70 198276.19 00:23:36.387 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme8n1 : 1.15 333.95 20.87 0.00 0.00 172161.16 8460.10 194463.19 00:23:36.387 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme9n1 : 1.14 280.63 17.54 0.00 0.00 202633.59 18469.24 207808.70 00:23:36.387 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.387 Verification LBA range: start 0x0 length 0x400 00:23:36.387 Nvme10n1 : 1.14 279.56 17.47 0.00 0.00 200459.92 15966.95 213528.20 00:23:36.387 [2024-11-20T11:37:42.151Z] =================================================================================================================== 00:23:36.387 [2024-11-20T11:37:42.151Z] Total : 2993.49 187.09 0.00 0.00 198962.70 8460.10 217341.21 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.387 rmmod nvme_tcp 00:23:36.387 rmmod nvme_fabrics 00:23:36.387 rmmod nvme_keyring 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 991783 ']' 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 991783 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 991783 ']' 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 991783 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.387 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991783 00:23:36.646 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.646 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.646 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991783' 00:23:36.646 killing process with pid 991783 00:23:36.646 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 991783 00:23:36.646 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 991783 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.905 12:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.905 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.442 00:23:39.442 real 0m15.611s 00:23:39.442 user 0m34.149s 00:23:39.442 sys 0m5.919s 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 ************************************ 00:23:39.442 END TEST nvmf_shutdown_tc1 00:23:39.442 ************************************ 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 ************************************ 00:23:39.442 
START TEST nvmf_shutdown_tc2 00:23:39.442 ************************************ 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.442 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.442 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.442 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:23:39.442 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:23:39.442 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:23:39.442 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.442 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:23:39.442 Found net devices under 0000:1a:00.0: cvl_0_0 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.442 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:23:39.443 Found net devices under 0000:1a:00.1: cvl_0_1 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.443 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:39.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:23:39.443 00:23:39.443 --- 10.0.0.2 ping statistics --- 00:23:39.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.443 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:23:39.443 00:23:39.443 --- 10.0.0.1 ping statistics --- 00:23:39.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.443 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.443 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.443 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=993798 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 993798 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 993798 ']' 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.443 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.443 [2024-11-20 12:37:45.061471] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:23:39.443 [2024-11-20 12:37:45.061513] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.443 [2024-11-20 12:37:45.136086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.443 [2024-11-20 12:37:45.175745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.443 [2024-11-20 12:37:45.175782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.443 [2024-11-20 12:37:45.175788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.443 [2024-11-20 12:37:45.175794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.443 [2024-11-20 12:37:45.175798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.443 [2024-11-20 12:37:45.177328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.443 [2024-11-20 12:37:45.177449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.443 [2024-11-20 12:37:45.177562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.443 [2024-11-20 12:37:45.177564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.382 [2024-11-20 12:37:45.913305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.382 12:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.382 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.382 Malloc1 00:23:40.382 [2024-11-20 12:37:46.022647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.382 Malloc2 00:23:40.382 Malloc3 00:23:40.382 Malloc4 00:23:40.642 Malloc5 00:23:40.642 Malloc6 00:23:40.642 Malloc7 00:23:40.642 Malloc8 00:23:40.642 Malloc9 
00:23:40.642 Malloc10 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=994112 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 994112 /var/tmp/bdevperf.sock 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 994112 ']' 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:40.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.903 { 00:23:40.903 "params": { 00:23:40.903 "name": "Nvme$subsystem", 00:23:40.903 "trtype": "$TEST_TRANSPORT", 00:23:40.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.903 "adrfam": "ipv4", 00:23:40.903 "trsvcid": "$NVMF_PORT", 00:23:40.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.903 "hdgst": ${hdgst:-false}, 00:23:40.903 "ddgst": ${ddgst:-false} 00:23:40.903 }, 00:23:40.903 "method": "bdev_nvme_attach_controller" 00:23:40.903 } 00:23:40.903 EOF 00:23:40.903 )") 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.903 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 
"adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": ${hdgst:-false}, 00:23:40.904 "ddgst": ${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 "adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": ${hdgst:-false}, 00:23:40.904 "ddgst": ${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 "adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": ${hdgst:-false}, 00:23:40.904 "ddgst": ${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 "adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": ${hdgst:-false}, 00:23:40.904 "ddgst": ${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 "adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": ${hdgst:-false}, 00:23:40.904 "ddgst": 
${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 "adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": ${hdgst:-false}, 00:23:40.904 "ddgst": ${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 [2024-11-20 12:37:46.492996] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:23:40.904 [2024-11-20 12:37:46.493043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994112 ] 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 "adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": ${hdgst:-false}, 00:23:40.904 "ddgst": ${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.904 { 00:23:40.904 "params": { 00:23:40.904 "name": "Nvme$subsystem", 00:23:40.904 "trtype": "$TEST_TRANSPORT", 00:23:40.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.904 "adrfam": "ipv4", 00:23:40.904 "trsvcid": "$NVMF_PORT", 00:23:40.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.904 "hdgst": 
${hdgst:-false}, 00:23:40.904 "ddgst": ${ddgst:-false} 00:23:40.904 }, 00:23:40.904 "method": "bdev_nvme_attach_controller" 00:23:40.904 } 00:23:40.904 EOF 00:23:40.904 )") 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.904 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.905 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.905 { 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme$subsystem", 00:23:40.905 "trtype": "$TEST_TRANSPORT", 00:23:40.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "$NVMF_PORT", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.905 "hdgst": ${hdgst:-false}, 00:23:40.905 "ddgst": ${ddgst:-false} 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 } 00:23:40.905 EOF 00:23:40.905 )") 00:23:40.905 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.905 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:23:40.905 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:40.905 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme1", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme2", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme3", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme4", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 
00:23:40.905 "name": "Nvme5", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme6", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme7", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme8", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme9", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 },{ 00:23:40.905 "params": { 00:23:40.905 "name": "Nvme10", 00:23:40.905 "trtype": "tcp", 00:23:40.905 "traddr": "10.0.0.2", 00:23:40.905 "adrfam": "ipv4", 00:23:40.905 "trsvcid": "4420", 00:23:40.905 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.905 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.905 "hdgst": false, 00:23:40.905 "ddgst": false 00:23:40.905 }, 00:23:40.905 "method": "bdev_nvme_attach_controller" 00:23:40.905 }' 00:23:40.905 [2024-11-20 12:37:46.566036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.905 [2024-11-20 12:37:46.603732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.812 Running I/O for 10 seconds... 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:43.382 12:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=200 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 200 -ge 100 ']' 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 994112 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 994112 ']' 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 994112 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 994112 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 994112' 00:23:43.382 killing process with pid 994112 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 994112 00:23:43.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 994112 00:23:43.647 Received shutdown signal, test time was about 0.826863 seconds 00:23:43.647 00:23:43.647 Latency(us) 00:23:43.647 [2024-11-20T11:37:49.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.647 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme1n1 : 0.81 327.72 20.48 0.00 0.00 191329.06 7804.74 192556.68 00:23:43.647 Job: Nvme2n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme2n1 : 0.82 311.31 19.46 0.00 0.00 199450.07 17992.61 203995.69 00:23:43.647 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme3n1 : 0.81 317.55 19.85 0.00 0.00 191380.01 14477.50 199229.44 00:23:43.647 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme4n1 : 0.81 317.02 19.81 0.00 0.00 188186.76 23712.12 189696.93 00:23:43.647 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme5n1 : 0.83 310.06 19.38 0.00 0.00 189444.89 15609.48 203995.69 00:23:43.647 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme6n1 : 0.82 325.12 20.32 0.00 0.00 176489.50 1750.11 192556.68 00:23:43.647 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme7n1 : 0.82 312.72 19.54 0.00 0.00 180889.95 15132.86 203042.44 00:23:43.647 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme8n1 : 0.83 309.82 19.36 0.00 0.00 178873.72 13762.56 203042.44 00:23:43.647 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme9n1 : 0.80 240.34 15.02 0.00 0.00 225376.50 17277.67 222107.46 00:23:43.647 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.647 Verification LBA range: start 0x0 length 0x400 00:23:43.647 Nvme10n1 : 0.80 241.10 15.07 0.00 0.00 219986.23 17277.67 203995.69 00:23:43.647 
[2024-11-20T11:37:49.411Z] =================================================================================================================== 00:23:43.647 [2024-11-20T11:37:49.411Z] Total : 3012.76 188.30 0.00 0.00 192573.89 1750.11 222107.46 00:23:43.647 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 993798 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:23:45.026 rmmod nvme_tcp 00:23:45.026 rmmod nvme_fabrics 00:23:45.026 rmmod nvme_keyring 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:45.026 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 993798 ']' 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 993798 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 993798 ']' 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 993798 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 993798 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 993798' 00:23:45.027 killing process with pid 993798 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 993798 00:23:45.027 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 993798 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.286 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.192 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.192 00:23:47.192 real 0m8.227s 00:23:47.192 user 0m25.661s 00:23:47.192 sys 0m1.323s 
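The `killprocess` helper from autotest_common.sh is exercised twice above (pids 994112 and 993798): it checks the pid is still alive with `kill -0`, refuses to signal a process whose comm is `sudo`, then kills and reaps it. A sketch of that logic, demonstrated on a throwaway background `sleep` rather than a real bdevperf/nvmf target process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still running?
    process_name="$(ps --no-headers -o comm= "$pid")"
    [ "$process_name" = sudo ] && return 1   # never signal a privileged wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap it; exit status is the signal's
    return 0
}

# demo target: a background sleep standing in for the SPDK app process
sleep 60 &
bgpid=$!
killprocess "$bgpid"
rc=$?
```

Note `wait` only works here because the pid is a child of the current shell, which also holds in the autotest scripts that launched the app themselves.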
00:23:47.192 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.192 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.192 ************************************ 00:23:47.192 END TEST nvmf_shutdown_tc2 00:23:47.192 ************************************ 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:47.452 ************************************ 00:23:47.452 START TEST nvmf_shutdown_tc3 00:23:47.452 ************************************ 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.452 12:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.452 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:47.452 12:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.452 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:23:47.453 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:23:47.453 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:23:47.453 Found net devices under 0000:1a:00.0: cvl_0_0 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:23:47.453 Found net devices under 0000:1a:00.1: cvl_0_1 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.453 12:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:47.453 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:47.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:23:47.715 00:23:47.715 --- 10.0.0.2 ping statistics --- 00:23:47.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.715 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:47.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:23:47.715 00:23:47.715 --- 10.0.0.1 ping statistics --- 00:23:47.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.715 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 
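The trace above (nvmf/common.sh@265 through @291) builds the test network: it creates a namespace, moves one interface into it, addresses both ends of the link, brings them up, and verifies connectivity with a ping in each direction before starting the target. A minimal standalone sketch of that sequence, emitted as a command list rather than executed (the real steps need root); interface names mirror the trace, but the emit-then-apply wrapper is illustrative, not part of SPDK:

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup traced in nvmf/common.sh@265-291.
# Emits the commands instead of running them; pipe the output to
# `sudo sh` to actually apply the sequence on a Linux host.
set -u

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # moved into the namespace, addressed 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the default namespace, addressed 10.0.0.1

netns_setup_cmds() {
    cat <<EOF
ip -4 addr flush $TARGET_IF
ip -4 addr flush $INITIATOR_IF
ip netns add $NVMF_TARGET_NAMESPACE
ip link set $TARGET_IF netns $NVMF_TARGET_NAMESPACE
ip addr add 10.0.0.1/24 dev $INITIATOR_IF
ip netns exec $NVMF_TARGET_NAMESPACE ip addr add 10.0.0.2/24 dev $TARGET_IF
ip link set $INITIATOR_IF up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set $TARGET_IF up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
ping -c 1 10.0.0.2
ip netns exec $NVMF_TARGET_NAMESPACE ping -c 1 10.0.0.1
EOF
}

netns_setup_cmds
```

The bidirectional ping matters: the outer ping proves the default namespace can reach the target address (the path the initiator will use for port 4420), and the `ip netns exec ... ping` proves the reverse path before `nvmf_tgt` is launched inside the namespace.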
12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=995304 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 995304 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 995304 ']' 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.715 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 [2024-11-20 12:37:53.384117] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
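The launch command in the trace above carries the `ip netns exec cvl_0_0_ns_spdk` prefix three times because common.sh@293 prepends `NVMF_TARGET_NS_CMD` onto the `NVMF_APP` array each time the network is (re)configured; repeating `ip netns exec` into the same namespace is redundant but harmless. A minimal sketch of the array-prepend pattern itself, with an illustrative binary path (the real path lives under the Jenkins workspace):

```shell
# Sketch of the array-prepend from nvmf/common.sh@266 and @293.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(/usr/local/bin/nvmf_tgt -i 0 -e 0xFFFF)   # hypothetical install path

# Prepend the namespace wrapper; expanding each array element in quotes
# preserves arguments that contain spaces.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

printf '%s\n' "${NVMF_APP[*]}"
```

Running the prepend once yields `ip netns exec cvl_0_0_ns_spdk /usr/local/bin/nvmf_tgt -i 0 -e 0xFFFF`; running it again stacks another prefix, which is what produced the triple wrapper in the log.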
00:23:47.715 [2024-11-20 12:37:53.384158] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.715 [2024-11-20 12:37:53.460343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.974 [2024-11-20 12:37:53.501143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.974 [2024-11-20 12:37:53.501182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.974 [2024-11-20 12:37:53.501188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.974 [2024-11-20 12:37:53.501194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.975 [2024-11-20 12:37:53.501198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:47.975 [2024-11-20 12:37:53.502962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.975 [2024-11-20 12:37:53.503078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.975 [2024-11-20 12:37:53.503189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.975 [2024-11-20 12:37:53.503190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.547 [2024-11-20 12:37:54.231977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.547 12:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.547 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.805 Malloc1 00:23:48.805 [2024-11-20 12:37:54.348822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.805 Malloc2 00:23:48.805 Malloc3 00:23:48.805 Malloc4 00:23:48.805 Malloc5 00:23:48.805 Malloc6 00:23:49.064 Malloc7 00:23:49.064 Malloc8 00:23:49.064 Malloc9 
00:23:49.064 Malloc10 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=995605 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 995605 /var/tmp/bdevperf.sock 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 995605 ']' 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:49.064 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:49.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 "adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.065 "hdgst": ${hdgst:-false}, 00:23:49.065 "ddgst": ${ddgst:-false} 00:23:49.065 }, 00:23:49.065 "method": "bdev_nvme_attach_controller" 00:23:49.065 } 00:23:49.065 EOF 00:23:49.065 )") 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 
"adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.065 "hdgst": ${hdgst:-false}, 00:23:49.065 "ddgst": ${ddgst:-false} 00:23:49.065 }, 00:23:49.065 "method": "bdev_nvme_attach_controller" 00:23:49.065 } 00:23:49.065 EOF 00:23:49.065 )") 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 "adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.065 "hdgst": ${hdgst:-false}, 00:23:49.065 "ddgst": ${ddgst:-false} 00:23:49.065 }, 00:23:49.065 "method": "bdev_nvme_attach_controller" 00:23:49.065 } 00:23:49.065 EOF 00:23:49.065 )") 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 "adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:49.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.065 "hdgst": ${hdgst:-false}, 00:23:49.065 "ddgst": ${ddgst:-false} 00:23:49.065 }, 00:23:49.065 "method": "bdev_nvme_attach_controller" 00:23:49.065 } 00:23:49.065 EOF 00:23:49.065 )") 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 "adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.065 "hdgst": ${hdgst:-false}, 00:23:49.065 "ddgst": ${ddgst:-false} 00:23:49.065 }, 00:23:49.065 "method": "bdev_nvme_attach_controller" 00:23:49.065 } 00:23:49.065 EOF 00:23:49.065 )") 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 "adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.065 "hdgst": ${hdgst:-false}, 00:23:49.065 "ddgst": 
${ddgst:-false} 00:23:49.065 }, 00:23:49.065 "method": "bdev_nvme_attach_controller" 00:23:49.065 } 00:23:49.065 EOF 00:23:49.065 )") 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 "adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.065 "hdgst": ${hdgst:-false}, 00:23:49.065 "ddgst": ${ddgst:-false} 00:23:49.065 }, 00:23:49.065 "method": "bdev_nvme_attach_controller" 00:23:49.065 } 00:23:49.065 EOF 00:23:49.065 )") 00:23:49.065 [2024-11-20 12:37:54.813575] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:23:49.065 [2024-11-20 12:37:54.813621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995605 ] 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.065 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.065 { 00:23:49.065 "params": { 00:23:49.065 "name": "Nvme$subsystem", 00:23:49.065 "trtype": "$TEST_TRANSPORT", 00:23:49.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.065 "adrfam": "ipv4", 00:23:49.065 "trsvcid": "$NVMF_PORT", 00:23:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.066 "hdgst": ${hdgst:-false}, 00:23:49.066 "ddgst": ${ddgst:-false} 00:23:49.066 }, 00:23:49.066 "method": "bdev_nvme_attach_controller" 00:23:49.066 } 00:23:49.066 EOF 00:23:49.066 )") 00:23:49.066 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.325 { 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme$subsystem", 00:23:49.325 "trtype": "$TEST_TRANSPORT", 00:23:49.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "$NVMF_PORT", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.325 "hdgst": 
${hdgst:-false}, 00:23:49.325 "ddgst": ${ddgst:-false} 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 } 00:23:49.325 EOF 00:23:49.325 )") 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:49.325 { 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme$subsystem", 00:23:49.325 "trtype": "$TEST_TRANSPORT", 00:23:49.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "$NVMF_PORT", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.325 "hdgst": ${hdgst:-false}, 00:23:49.325 "ddgst": ${ddgst:-false} 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 } 00:23:49.325 EOF 00:23:49.325 )") 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:49.325 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme1", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme2", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme3", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme4", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 
00:23:49.325 "name": "Nvme5", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme6", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme7", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme8", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme9", 00:23:49.325 "trtype": "tcp", 00:23:49.325 "traddr": "10.0.0.2", 00:23:49.325 "adrfam": "ipv4", 00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:49.325 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:49.325 "hdgst": false, 00:23:49.325 "ddgst": false 00:23:49.325 }, 00:23:49.325 "method": "bdev_nvme_attach_controller" 00:23:49.325 },{ 00:23:49.325 "params": { 00:23:49.325 "name": "Nvme10", 00:23:49.326 "trtype": "tcp", 00:23:49.326 "traddr": "10.0.0.2", 00:23:49.326 "adrfam": "ipv4", 00:23:49.326 "trsvcid": "4420", 00:23:49.326 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:49.326 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:49.326 "hdgst": false, 00:23:49.326 "ddgst": false 00:23:49.326 }, 00:23:49.326 "method": "bdev_nvme_attach_controller" 00:23:49.326 }' 00:23:49.326 [2024-11-20 12:37:54.890720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.326 [2024-11-20 12:37:54.929025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.230 Running I/O for 10 seconds... 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
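The long block above is `gen_nvmf_target_json` (nvmf/common.sh@560-586) at work: one unquoted heredoc per subsystem number produces a JSON fragment with `$subsystem`, `$TEST_TRANSPORT`, etc. expanded, the fragments accumulate in the `config` array, and they are joined with commas via `IFS` before being fed to bdevperf as `--json /dev/fd/63`. A simplified, dependency-free sketch of the same pattern (the real helper also pipes the result through `jq .`, omitted here; variable values are illustrative):

```shell
# Sketch of the per-subsystem config generation from nvmf/common.sh@560-586.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
    # Unquoted heredoc delimiter, so $subsystem and friends expand now.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as common.sh@585-586 does with IFS=, and
# "${config[*]}"; a function keeps the IFS change local.
join_with_commas() { local IFS=,; printf '%s\n' "$*"; }
joined=$(join_with_commas "${config[@]}")
printf '%s\n' "$joined"
```

Because `"${array[*]}"` joins elements with the first character of `IFS`, the output contains `},{` between fragments, matching the `printf '%s\n' '{ ... },{ ... }'` seen in the trace.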
/var/tmp/bdevperf.sock Nvme1n1
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:23:51.230 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:23:51.231 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:23:51.231 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:23:51.231 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:51.231 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:51.231 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:51.231 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:51.231 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 995304
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 995304 ']'
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 995304
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:23:51.504 12:37:57
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 995304
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 995304'
killing process with pid 995304
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 995304
00:23:51.504 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 995304
00:23:51.504 [2024-11-20 12:37:57.079231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cb40 is same with the state(6) to be set
[last message repeated for tqpair=0x179cb40, 12:37:57.079309 through 12:37:57.079668]
00:23:51.505 [2024-11-20 12:37:57.080697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1914ba0 is same with the state(6) to be set
[last message repeated for tqpair=0x1914ba0, 12:37:57.080737 through 12:37:57.081104]
00:23:51.506 [2024-11-20 12:37:57.081989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fc080 is same with the state(6) to be set
00:23:51.506 [2024-11-20 12:37:57.082165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d030 is same with the state(6) to be set
00:23:51.506 [2024-11-20 12:37:57.082214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d030 is same with the state(6) to be set
00:23:51.506 [2024-11-20 12:37:57.082221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d030 is same with the state(6) to be set
00:23:51.506 [2024-11-20 12:37:57.082230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.506 [2024-11-20 12:37:57.082233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d030 is same with the state(6) to be set
00:23:51.506 [2024-11-20 12:37:57.082238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.506 [2024-11-20 12:37:57.082241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d030 is same with the state(6) to be set
00:23:51.506 [2024-11-20 12:37:57.082245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf855f0 is same with the state(6) to be set
00:23:51.506 [2024-11-20 12:37:57.082249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d030 is same with the state(6) to be set
[last message repeated for tqpair=0x179d030, 12:37:57.082258 through 12:37:57.082719]
00:23:51.507 [2024-11-20 12:37:57.085224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set
00:23:51.507 [2024-11-20 12:37:57.085268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set
00:23:51.507 [2024-11-20 12:37:57.085277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set
00:23:51.507 [2024-11-20 12:37:57.085284]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085376] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085453] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085526] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.507 [2024-11-20 12:37:57.085531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085536] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:51.508 [2024-11-20 12:37:57.085566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085605] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:23:51.508 [2024-11-20 12:37:57.085610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.085661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d500 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.086972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d9f0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d9f0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d9f0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 
12:37:57.087015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d9f0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d9f0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d9f0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087534] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087606] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087678] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.508 [2024-11-20 12:37:57.087731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087749] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087819] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.087848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dec0 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.088569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e240 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.088589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e240 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.088600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e240 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.088606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e240 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089415] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089489] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089563] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set 00:23:51.509 [2024-11-20 12:37:57.089569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e710 is same with the state(6) to be set (last message repeated for tqpair=0x179e710) 00:23:51.510 [2024-11-20 12:37:57.090565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179ec00 is same with the state(6) to be set (last message repeated for tqpair=0x179ec00) 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179ec00 is same with the state(6) to be set 00:23:51.511 [2024-11-20 12:37:57.090922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179ec00 is same with the state(6) to be set 00:23:51.511 [2024-11-20 12:37:57.090928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179ec00 is same with the state(6) to be set 00:23:51.511 [2024-11-20 12:37:57.090933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179ec00 is same with the state(6) to be set 00:23:51.511 [2024-11-20 12:37:57.095698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 
[2024-11-20 12:37:57.095854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.511 [2024-11-20 12:37:57.095972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.511 [2024-11-20 12:37:57.095979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.095986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.095992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.095999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 
12:37:57.096168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:51.512 [2024-11-20 12:37:57.096409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.512 [2024-11-20 12:37:57.096428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.512 [2024-11-20 12:37:57.096435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.513 [2024-11-20 12:37:57.096611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:51.513 [2024-11-20 12:37:57.096868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.096885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.096899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.096912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.096924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf830b0 is same with the state(6) to be set 00:23:51.513 [2024-11-20 12:37:57.096947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fc080 (9): Bad file descriptor 00:23:51.513 [2024-11-20 12:37:57.096974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.096982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.096992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097022] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6740 is same with the state(6) to be set 00:23:51.513 [2024-11-20 12:37:57.097069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98610 is same with the state(6) to be set 00:23:51.513 [2024-11-20 12:37:57.097149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.513 [2024-11-20 12:37:57.097207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c29d0 is same with the state(6) to be set 00:23:51.513 [2024-11-20 12:37:57.097229] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.513 [2024-11-20 12:37:57.097240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5580 is same with the state(6) to be set 00:23:51.514 [2024-11-20 12:37:57.097311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0900 is same with the state(6) to be set 00:23:51.514 [2024-11-20 12:37:57.097418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afbe0 is same with the state(6) to be set 00:23:51.514 [2024-11-20 12:37:57.097504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.514 [2024-11-20 12:37:57.097558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf85190 is same with the state(6) to be set 00:23:51.514 [2024-11-20 12:37:57.097580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf855f0 (9): Bad file descriptor 00:23:51.514 [2024-11-20 12:37:57.097726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097809] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.514 [2024-11-20 12:37:57.097986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.514 [2024-11-20 12:37:57.097993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 
[2024-11-20 12:37:57.098093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.098344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.098353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.105551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.105563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.105570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.105579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.105585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.105592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.105598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.515 [2024-11-20 12:37:57.105606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.515 [2024-11-20 12:37:57.105611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 
[2024-11-20 12:37:57.105645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.105899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.105905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.107015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:51.516 [2024-11-20 12:37:57.107045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6740 (9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf830b0 (9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98610 (9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c29d0 (9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5580 
(9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0900 (9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13afbe0 (9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf85190 (9): Bad file descriptor 00:23:51.516 [2024-11-20 12:37:57.107199] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:51.516 [2024-11-20 12:37:57.108693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.108713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.108724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.108734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.108742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.108748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.108756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.108762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.108769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.108775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.108782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.108787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.108795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.516 [2024-11-20 12:37:57.108801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.516 [2024-11-20 12:37:57.108808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 
[2024-11-20 12:37:57.108840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.108987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.108994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.517 [2024-11-20 12:37:57.109213] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.517 [2024-11-20 12:37:57.109221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 
12:37:57.109367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.109563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.109571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189910 is same with the state(6) to be set 00:23:51.518 [2024-11-20 12:37:57.111215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.111239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.111267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:51.518 [2024-11-20 12:37:57.111276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.111286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.111295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.111305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.111313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.111323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.518 [2024-11-20 12:37:57.111331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.518 [2024-11-20 12:37:57.111340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b950 is same with the state(6) to be set 00:23:51.519 [2024-11-20 12:37:57.111434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.519 [2024-11-20 12:37:57.111445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.519 [2024-11-20 12:37:57.111456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.519 [2024-11-20 12:37:57.111464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.111982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.519 [2024-11-20 12:37:57.111990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.519 [2024-11-20 12:37:57.112000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.520 [2024-11-20 12:37:57.112564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.520 [2024-11-20 12:37:57.112572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.112582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.112590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.112600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.112607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.112616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138ce70 is same with the state(6) to be set
00:23:51.521 [2024-11-20 12:37:57.113809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:51.521 [2024-11-20 12:37:57.113828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:51.521 [2024-11-20 12:37:57.113845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:51.521 [2024-11-20 12:37:57.114134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:51.521 [2024-11-20 12:37:57.114152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a6740 with addr=10.0.0.2, port=4420
00:23:51.521 [2024-11-20 12:37:57.114162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6740 is same with the state(6) to be set
00:23:51.521 [2024-11-20 12:37:57.114528] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:51.521 [2024-11-20 12:37:57.114593] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:51.521 [2024-11-20 12:37:57.114637] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:51.521 [2024-11-20 12:37:57.115507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:51.521 [2024-11-20 12:37:57.115682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:51.521 [2024-11-20 12:37:57.115699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13afbe0 with addr=10.0.0.2, port=4420
00:23:51.521 [2024-11-20 12:37:57.115709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afbe0 is same with the state(6) to be set
00:23:51.521 [2024-11-20 12:37:57.115951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:51.521 [2024-11-20 12:37:57.115964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf855f0 with addr=10.0.0.2, port=4420
00:23:51.521 [2024-11-20 12:37:57.115973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf855f0 is same with the state(6) to be set
00:23:51.521 [2024-11-20 12:37:57.116077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:51.521 [2024-11-20 12:37:57.116088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fc080 with addr=10.0.0.2, port=4420
00:23:51.521 [2024-11-20 12:37:57.116097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fc080 is same with the state(6) to be set
00:23:51.521 [2024-11-20 12:37:57.116108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6740 (9): Bad file descriptor
00:23:51.521 [2024-11-20 12:37:57.116421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.521 [2024-11-20 12:37:57.116818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.521 [2024-11-20 12:37:57.116826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.116983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.116991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.522 [2024-11-20 12:37:57.117294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.522 [2024-11-20 12:37:57.117303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.522 [2024-11-20 12:37:57.117313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.522 [2024-11-20 12:37:57.117321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.522 [2024-11-20 12:37:57.117330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.522 [2024-11-20 12:37:57.117338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.522 [2024-11-20 12:37:57.117348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.522 [2024-11-20 12:37:57.117356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.522 [2024-11-20 12:37:57.117367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.522 [2024-11-20 12:37:57.117375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.522 [2024-11-20 12:37:57.117385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.522 [2024-11-20 12:37:57.117393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.522 [2024-11-20 12:37:57.117402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.522 [2024-11-20 12:37:57.117427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.523 [2024-11-20 12:37:57.117438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.523 [2024-11-20 12:37:57.117446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.523 [2024-11-20 12:37:57.117456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.523 [2024-11-20 12:37:57.117464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.523 [2024-11-20 12:37:57.117474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.523 [2024-11-20 12:37:57.117483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.523 [2024-11-20 12:37:57.117493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.523 [2024-11-20 12:37:57.117501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.523 [2024-11-20 12:37:57.117511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.523 [2024-11-20 12:37:57.117519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:51.523 [2024-11-20 12:37:57.117529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.117539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.117549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.117557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.117567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.117575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.117585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.117593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.117603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.117611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.117620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142adf0 is same with the state(6) to be set
00:23:51.523 [2024-11-20 12:37:57.118180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:51.523 [2024-11-20 12:37:57.118200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5580 with addr=10.0.0.2, port=4420
00:23:51.523 [2024-11-20 12:37:57.118210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5580 is same with the state(6) to be set
00:23:51.523 [2024-11-20 12:37:57.118222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13afbe0 (9): Bad file descriptor
00:23:51.523 [2024-11-20 12:37:57.118234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf855f0 (9): Bad file descriptor
00:23:51.523 [2024-11-20 12:37:57.118244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fc080 (9): Bad file descriptor
00:23:51.523 [2024-11-20 12:37:57.118254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:51.523 [2024-11-20 12:37:57.118262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:51.523 [2024-11-20 12:37:57.118272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:51.523 [2024-11-20 12:37:57.118281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:51.523 [2024-11-20 12:37:57.118294] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:23:51.523 [2024-11-20 12:37:57.119810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:51.523 [2024-11-20 12:37:57.119864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5580 (9): Bad file descriptor
00:23:51.523 [2024-11-20 12:37:57.119876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:51.523 [2024-11-20 12:37:57.119884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:51.523 [2024-11-20 12:37:57.119892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:51.523 [2024-11-20 12:37:57.119901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:51.523 [2024-11-20 12:37:57.119913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:51.523 [2024-11-20 12:37:57.119921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:51.523 [2024-11-20 12:37:57.119928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:51.523 [2024-11-20 12:37:57.119936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:51.523 [2024-11-20 12:37:57.119943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:51.523 [2024-11-20 12:37:57.119950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:51.523 [2024-11-20 12:37:57.119958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:51.523 [2024-11-20 12:37:57.119965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:51.523 [2024-11-20 12:37:57.120015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.523 [2024-11-20 12:37:57.120184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.523 [2024-11-20 12:37:57.120190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.524 [2024-11-20 12:37:57.120652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.524 [2024-11-20 12:37:57.120659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.120922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.120928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118aae0 is same with the state(6) to be set
00:23:51.525 [2024-11-20 12:37:57.121845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.121858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.525 [2024-11-20 12:37:57.121868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.525 [2024-11-20 12:37:57.121874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.525 [2024-11-20 12:37:57.121987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.525 [2024-11-20 12:37:57.121994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122182] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122255] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.526 [2024-11-20 12:37:57.122343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.526 [2024-11-20 12:37:57.122349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 
12:37:57.122415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:51.527 [2024-11-20 12:37:57.122644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.122718] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.122725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13863c0 is same with the state(6) to be set 00:23:51.527 [2024-11-20 12:37:57.123623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.123635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.123646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.123653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.527 [2024-11-20 12:37:57.123661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.527 [2024-11-20 12:37:57.123669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.528 [2024-11-20 12:37:57.123777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:51.528 [2024-11-20 12:37:57.123783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.528 [2024-11-20 12:37:57.123790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[identical command/completion pairs elided: READ cid 34-63 (lba 20736-24448) and WRITE cid 0-21 (lba 24576-27264), each completed with ABORTED - SQ DELETION (00/08)]
00:23:51.529 [2024-11-20 12:37:57.124506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1388ea0 is same with the state(6) to be set
00:23:51.529 [2024-11-20 12:37:57.125396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[identical command/completion pairs elided: READ cid 1-58 (lba 16512-23808), each completed with ABORTED - SQ DELETION (00/08)]
00:23:51.531 [2024-11-20 12:37:57.126193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:51.531 [2024-11-20 12:37:57.126201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.531 [2024-11-20 12:37:57.126207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.531 [2024-11-20 12:37:57.126214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.531 [2024-11-20 12:37:57.126220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.531 [2024-11-20 12:37:57.126227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.531 [2024-11-20 12:37:57.126233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.531 [2024-11-20 12:37:57.126240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.532 [2024-11-20 12:37:57.126246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.532 [2024-11-20 12:37:57.126253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.532 [2024-11-20 12:37:57.126259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.532 [2024-11-20 12:37:57.126266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138a420 is same with the state(6) to be set 00:23:51.532 [2024-11-20 12:37:57.128243] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.128266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.128275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:51.532 task offset: 20736 on job bdev=Nvme6n1 fails 00:23:51.532 00:23:51.532 Latency(us) 00:23:51.532 [2024-11-20T11:37:57.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.532 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme1n1 ended in about 0.62 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme1n1 : 0.62 212.75 13.30 103.15 0.00 199303.30 23354.65 183024.17 00:23:51.532 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme2n1 ended in about 0.63 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme2n1 : 0.63 202.60 12.66 101.30 0.00 202234.72 26571.87 185883.93 00:23:51.532 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme3n1 ended in about 0.63 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme3n1 : 0.63 203.40 12.71 101.70 0.00 196702.02 12690.15 203042.44 00:23:51.532 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme4n1 ended in about 0.62 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme4n1 : 0.62 310.62 19.41 103.54 0.00 141126.87 12094.37 186837.18 00:23:51.532 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme5n1 ended in about 0.63 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 
00:23:51.532 Nvme5n1 : 0.63 202.03 12.63 101.01 0.00 188658.04 15252.01 193509.93 00:23:51.532 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme6n1 ended in about 0.62 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme6n1 : 0.62 207.48 12.97 103.74 0.00 178474.05 10604.92 202089.19 00:23:51.532 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme7n1 ended in about 0.64 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme7n1 : 0.64 236.09 14.76 100.73 0.00 161390.06 16920.20 195416.44 00:23:51.532 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme8n1 ended in about 0.64 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme8n1 : 0.64 200.91 12.56 100.45 0.00 175766.19 15252.01 198276.19 00:23:51.532 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme9n1 ended in about 0.63 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme9n1 : 0.63 204.70 12.79 8.00 0.00 238602.55 17277.67 229733.47 00:23:51.532 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:51.532 Job: Nvme10n1 ended in about 0.62 seconds with error 00:23:51.532 Verification LBA range: start 0x0 length 0x400 00:23:51.532 Nvme10n1 : 0.62 102.63 6.41 102.63 0.00 242934.69 24665.37 228780.22 00:23:51.532 [2024-11-20T11:37:57.296Z] =================================================================================================================== 00:23:51.532 [2024-11-20T11:37:57.296Z] Total : 2083.22 130.20 926.27 0.00 187276.89 10604.92 229733.47 00:23:51.532 [2024-11-20 12:37:57.150643] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:51.532 [2024-11-20 12:37:57.151058] posix.c:1054:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:23:51.532 [2024-11-20 12:37:57.151078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf830b0 with addr=10.0.0.2, port=4420 00:23:51.532 [2024-11-20 12:37:57.151088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf830b0 is same with the state(6) to be set 00:23:51.532 [2024-11-20 12:37:57.151097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:51.532 [2024-11-20 12:37:57.151103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:51.532 [2024-11-20 12:37:57.151111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:51.532 [2024-11-20 12:37:57.151125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:51.532 [2024-11-20 12:37:57.151174] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:23:51.532 [2024-11-20 12:37:57.151471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.151485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.151785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.532 [2024-11-20 12:37:57.151799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf85190 with addr=10.0.0.2, port=4420 00:23:51.532 [2024-11-20 12:37:57.151807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf85190 is same with the state(6) to be set 00:23:51.532 [2024-11-20 12:37:57.152013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.532 [2024-11-20 12:37:57.152023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0900 with addr=10.0.0.2, port=4420 00:23:51.532 [2024-11-20 12:37:57.152031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0900 is same with the state(6) to be set 00:23:51.532 [2024-11-20 12:37:57.152285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.532 [2024-11-20 12:37:57.152294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe98610 with addr=10.0.0.2, port=4420 00:23:51.532 [2024-11-20 12:37:57.152301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98610 is same with the state(6) to be set 00:23:51.532 [2024-11-20 12:37:57.152314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf830b0 (9): Bad file descriptor 00:23:51.532 [2024-11-20 12:37:57.152327] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:23:51.532 [2024-11-20 12:37:57.152345] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:51.532 [2024-11-20 12:37:57.152354] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:23:51.532 [2024-11-20 12:37:57.152365] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:51.532 [2024-11-20 12:37:57.152374] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:51.532 [2024-11-20 12:37:57.153463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.153480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.153489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.153497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:51.532 [2024-11-20 12:37:57.153714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.532 [2024-11-20 12:37:57.153726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c29d0 with addr=10.0.0.2, port=4420 00:23:51.533 [2024-11-20 12:37:57.153734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c29d0 is same with the state(6) to be set 00:23:51.533 [2024-11-20 12:37:57.153956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.533 [2024-11-20 12:37:57.153965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x13a6740 with addr=10.0.0.2, port=4420 00:23:51.533 [2024-11-20 12:37:57.153972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6740 is same with the state(6) to be set 00:23:51.533 [2024-11-20 12:37:57.153985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf85190 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.153994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0900 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.154001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98610 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.154009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.154015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.154022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.154029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:23:51.533 [2024-11-20 12:37:57.154601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.533 [2024-11-20 12:37:57.154619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fc080 with addr=10.0.0.2, port=4420 00:23:51.533 [2024-11-20 12:37:57.154626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fc080 is same with the state(6) to be set 00:23:51.533 [2024-11-20 12:37:57.154700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.533 [2024-11-20 12:37:57.154709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf855f0 with addr=10.0.0.2, port=4420 00:23:51.533 [2024-11-20 12:37:57.154715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf855f0 is same with the state(6) to be set 00:23:51.533 [2024-11-20 12:37:57.154857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.533 [2024-11-20 12:37:57.154865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13afbe0 with addr=10.0.0.2, port=4420 00:23:51.533 [2024-11-20 12:37:57.154872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afbe0 is same with the state(6) to be set 00:23:51.533 [2024-11-20 12:37:57.155019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.533 [2024-11-20 12:37:57.155029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f5580 with addr=10.0.0.2, port=4420 00:23:51.533 [2024-11-20 12:37:57.155036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f5580 is same with the state(6) to be set 00:23:51.533 [2024-11-20 12:37:57.155045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c29d0 (9): Bad file descriptor 00:23:51.533 [2024-11-20 
12:37:57.155054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6740 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.155062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:51.533 [2024-11-20 12:37:57.155089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:51.533 [2024-11-20 12:37:57.155127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:23:51.533 [2024-11-20 12:37:57.155195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fc080 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.155204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf855f0 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.155212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13afbe0 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.155219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f5580 (9): Bad file descriptor 00:23:51.533 [2024-11-20 12:37:57.155226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:51.533 [2024-11-20 12:37:57.155248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:23:51.533 [2024-11-20 12:37:57.155284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:51.533 [2024-11-20 12:37:57.155588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:51.533 [2024-11-20 12:37:57.155614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:51.533 [2024-11-20 12:37:57.155635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:51.533 [2024-11-20 12:37:57.155643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:51.533 [2024-11-20 12:37:57.155648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:51.533 [2024-11-20 12:37:57.155653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:51.793 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 995605 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 995605 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 995605 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:52.805 12:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.805 rmmod nvme_tcp 00:23:52.805 rmmod nvme_fabrics 00:23:52.805 rmmod nvme_keyring 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 995304 ']' 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 995304 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 995304 ']' 00:23:52.805 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 995304 00:23:52.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (995304) - No such process 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 995304 is not found' 00:23:52.806 Process with pid 995304 is not found 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:52.806 12:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.806 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.341 00:23:55.341 real 0m7.628s 00:23:55.341 user 0m18.396s 00:23:55.341 sys 0m1.214s 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.341 ************************************ 00:23:55.341 END TEST nvmf_shutdown_tc3 00:23:55.341 ************************************ 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:55.341 ************************************ 00:23:55.341 START TEST nvmf_shutdown_tc4 00:23:55.341 ************************************ 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.341 12:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.341 12:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:23:55.341 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:23:55.341 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.341 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.342 12:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:23:55.342 Found net devices under 0000:1a:00.0: cvl_0_0 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:23:55.342 Found net devices under 0000:1a:00.1: cvl_0_1 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.342 12:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:23:55.342 00:23:55.342 --- 10.0.0.2 ping statistics --- 00:23:55.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.342 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:23:55.342 00:23:55.342 --- 10.0.0.1 ping statistics --- 00:23:55.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.342 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.342 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.342 12:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=996864 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 996864 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 996864 ']' 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.342 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:55.342 [2024-11-20 12:38:01.070542] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:23:55.342 [2024-11-20 12:38:01.070584] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.600 [2024-11-20 12:38:01.147751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.600 [2024-11-20 12:38:01.187336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.600 [2024-11-20 12:38:01.187373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.600 [2024-11-20 12:38:01.187379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.600 [2024-11-20 12:38:01.187385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.600 [2024-11-20 12:38:01.187390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.600 [2024-11-20 12:38:01.189025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.600 [2024-11-20 12:38:01.189138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.600 [2024-11-20 12:38:01.189249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.600 [2024-11-20 12:38:01.189251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:56.168 [2024-11-20 12:38:01.920986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.168 12:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.168 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.427 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:56.427 Malloc1 00:23:56.427 [2024-11-20 12:38:02.039181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.427 Malloc2 00:23:56.427 Malloc3 00:23:56.427 Malloc4 00:23:56.427 Malloc5 00:23:56.686 Malloc6 00:23:56.686 Malloc7 00:23:56.686 Malloc8 00:23:56.686 Malloc9 
00:23:56.686 Malloc10 00:23:56.686 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.686 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:56.686 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.686 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:56.944 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=997147 00:23:56.944 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:56.944 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:56.944 [2024-11-20 12:38:02.542058] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 996864 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 996864 ']' 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 996864 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 996864 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 996864' 00:24:02.221 killing process with pid 996864 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 996864 00:24:02.221 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 996864 00:24:02.221 [2024-11-20 12:38:07.530470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bd760 is same with the state(6) to be set 00:24:02.221 [2024-11-20 12:38:07.530517] 
00:24:02.221 [2024-11-20 12:38:07.530525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bd760 is same with the state(6) to be set
00:24:02.221 [2024-11-20 12:38:07.531638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdc30 is same with the state(6) to be set
00:24:02.221 [2024-11-20 12:38:07.532694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26be100 is same with the state(6) to be set
00:24:02.222 Write completed with error (sct=0, sc=8)
00:24:02.222 starting I/O failed: -6
00:24:02.222 [2024-11-20 12:38:07.536941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2625990 is same with the state(6) to be set
00:24:02.222 [2024-11-20 12:38:07.537029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:02.222 [2024-11-20 12:38:07.537678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26beaa0 is same with the state(6) to be set
00:24:02.222 [2024-11-20 12:38:07.537908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:02.223 [2024-11-20 12:38:07.538758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:02.223 [2024-11-20 12:38:07.540129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.223 NVMe io qpair process completion error
00:24:02.223 [2024-11-20 12:38:07.540733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2627b60 is same with the state(6) to be set
00:24:02.223 [2024-11-20 12:38:07.540883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2626cf0 is same with the state(6) to be set
00:24:02.224 [2024-11-20 12:38:07.541043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:02.224 [2024-11-20 12:38:07.541309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2623720 is same with the state(6) to be set
00:24:02.224 [2024-11-20 12:38:07.541716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2623c10 is same with the state(6) to be set
00:24:02.224 [2024-11-20 12:38:07.542006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:02.224 [2024-11-20 12:38:07.543032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:02.225 [2024-11-20 12:38:07.544386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.225 NVMe io qpair process completion error
error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 [2024-11-20 12:38:07.547661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error 
(sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 
00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 [2024-11-20 12:38:07.548512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.225 Write completed with error (sct=0, sc=8) 00:24:02.225 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 
00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 [2024-11-20 12:38:07.548975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with starting I/O failed: -6 00:24:02.226 the state(6) to be set 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 [2024-11-20 12:38:07.548997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with the state(6) to be set 00:24:02.226 starting I/O failed: -6 00:24:02.226 [2024-11-20 12:38:07.549004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with Write completed with error (sct=0, sc=8) 00:24:02.226 the state(6) to be set 00:24:02.226 starting I/O failed: -6 00:24:02.226 [2024-11-20 
12:38:07.549017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with the state(6) to be set 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 [2024-11-20 12:38:07.549029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with Write completed with error (sct=0, sc=8) 00:24:02.226 the state(6) to be set 00:24:02.226 starting I/O failed: -6 00:24:02.226 [2024-11-20 12:38:07.549046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628500 is same with the state(6) to be set 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 
00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 [2024-11-20 12:38:07.549296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:02.226 [2024-11-20 12:38:07.549296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba3a0 is same with the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba3a0 is same with the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba3a0 is same with the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba3a0 is same with the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba3a0 is same with the state(6) to be set 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 
starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 [2024-11-20 12:38:07.549607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba890 is same with starting I/O failed: -6 00:24:02.226 the state(6) to be set 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 [2024-11-20 12:38:07.549627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba890 is same with the state(6) to be set 00:24:02.226 starting I/O failed: -6 00:24:02.226 [2024-11-20 12:38:07.549633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba890 is same with the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba890 is same with the state(6) to be set 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 [2024-11-20 12:38:07.549645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba890 is same with starting I/O failed: -6 00:24:02.226 the state(6) to be set 00:24:02.226 [2024-11-20 12:38:07.549655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ba890 is same with the state(6) to be set 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O 
failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.226 Write completed with error (sct=0, sc=8) 00:24:02.226 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting 
I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 [2024-11-20 12:38:07.550038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628030 is same with the state(6) to be set 00:24:02.227 starting I/O failed: -6 00:24:02.227 [2024-11-20 12:38:07.550054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628030 is same with the state(6) to be set 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 [2024-11-20 12:38:07.550061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628030 is same with the state(6) to be set 00:24:02.227 starting I/O failed: -6 00:24:02.227 [2024-11-20 12:38:07.550068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628030 is same with the state(6) to be set 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 [2024-11-20 12:38:07.550075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628030 is same with the state(6) to be set 00:24:02.227 starting I/O failed: -6 00:24:02.227 [2024-11-20 12:38:07.550081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2628030 is same with the state(6) to be set 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 
00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 [2024-11-20 12:38:07.550346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.227 NVMe io qpair process completion error 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error 
(sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 [2024-11-20 12:38:07.551215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error 
(sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 
00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 [2024-11-20 12:38:07.552003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.227 Write completed with error (sct=0, sc=8) 00:24:02.227 starting I/O failed: -6 00:24:02.228 Write completed with error (sct=0, sc=8) 00:24:02.228 Write completed with error (sct=0, sc=8) 00:24:02.228 starting I/O failed: -6 
00:24:02.228 Write completed with error (sct=0, sc=8)
00:24:02.228 starting I/O failed: -6
00:24:02.228 [identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records repeat for each outstanding write; repeats elided]
00:24:02.228 [2024-11-20 12:38:07.552753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:02.228 [identical completion-error records elided]
00:24:02.228 [2024-11-20 12:38:07.554073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:02.228 NVMe io qpair process completion error
00:24:02.228 [identical completion-error records elided]
00:24:02.229 [2024-11-20 12:38:07.554913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:02.229 [identical completion-error records elided]
00:24:02.229 [2024-11-20 12:38:07.555731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.229 [identical completion-error records elided]
00:24:02.229 [2024-11-20 12:38:07.556508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:02.230 [identical completion-error records elided]
00:24:02.230 [2024-11-20 12:38:07.557922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:02.230 NVMe io qpair process completion error
00:24:02.230 [identical completion-error records elided]
00:24:02.230 [2024-11-20 12:38:07.558991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:02.230 [identical completion-error records elided]
00:24:02.230 [2024-11-20 12:38:07.559773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:02.231 [identical completion-error records elided]
00:24:02.231 [2024-11-20 12:38:07.560571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:02.231 [identical completion-error records elided]
00:24:02.231 [2024-11-20 12:38:07.564259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.231 NVMe io qpair process completion error
00:24:02.232 Write completed with error (sct=0, sc=8)
00:24:02.232 [further identical "Write completed with error (sct=0, sc=8)" records continue; repeats elided]
00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 [2024-11-20 12:38:07.565123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error 
(sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 
00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 [2024-11-20 12:38:07.565861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:02.232 starting I/O failed: -6 00:24:02.232 starting I/O failed: -6 00:24:02.232 starting I/O failed: -6 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 
Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, 
sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 [2024-11-20 12:38:07.566823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.232 Write completed with error (sct=0, sc=8) 00:24:02.232 starting I/O failed: -6 00:24:02.233 Write 
completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 
Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 [2024-11-20 12:38:07.571636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.233 NVMe io qpair process completion error 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 
starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error 
(sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 [2024-11-20 12:38:07.572613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O 
failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 [2024-11-20 12:38:07.573294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 
00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.233 starting I/O failed: -6 00:24:02.233 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 
00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 [2024-11-20 12:38:07.574148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 
starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 
00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, 
sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 [2024-11-20 12:38:07.575420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.234 NVMe io qpair process completion error 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed 
with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 starting I/O failed: -6 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.234 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 
00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 [2024-11-20 12:38:07.576297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 
00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 [2024-11-20 12:38:07.577054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write 
completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 
00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 [2024-11-20 12:38:07.577895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.235 Write completed with 
error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed 
with error (sct=0, sc=8) 00:24:02.235 starting I/O failed: -6 00:24:02.235 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write 
completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 [2024-11-20 12:38:07.579570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.236 NVMe io qpair process completion error 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with 
error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 
00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 [2024-11-20 12:38:07.580633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 
00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.236 starting I/O failed: -6 00:24:02.236 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 [2024-11-20 12:38:07.581428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write 
completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 
00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 [2024-11-20 12:38:07.582188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 
starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 
00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, 
sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 Write completed with error (sct=0, sc=8) 00:24:02.237 starting I/O failed: -6 00:24:02.237 [2024-11-20 12:38:07.588502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.237 NVMe io qpair process completion error 00:24:02.237 Initializing NVMe Controllers 00:24:02.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:02.237 Controller IO queue size 128, less than required. 00:24:02.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:02.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:02.237 Controller IO queue size 128, less than required. 00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:02.238 Controller IO queue size 128, less than required. 00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:02.238 Controller IO queue size 128, less than required. 00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:02.238 Controller IO queue size 128, less than required. 00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:02.238 Controller IO queue size 128, less than required. 00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:02.238 Controller IO queue size 128, less than required. 00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:02.238 Controller IO queue size 128, less than required. 00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:02.238 Controller IO queue size 128, less than required.
00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:02.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:02.238 Controller IO queue size 128, less than required.
00:24:02.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:02.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:02.238 Initialization complete. Launching workers.
00:24:02.238 ========================================================
00:24:02.238 Latency(us)
00:24:02.238 Device Information : IOPS MiB/s Average min max
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2173.41 93.39 58906.88 604.45 104889.91
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2184.81 93.88 58658.19 560.55 110162.05
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2204.39 94.72 58151.17 642.72 111296.05
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2164.38 93.00 58696.56 742.48 98786.64
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2188.04 94.02 58081.97 578.50 98812.45
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2149.97 92.38 59585.21 747.63 108456.49
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2135.55 91.76 59524.14 796.83 99488.05
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2133.19 91.66 59601.74 788.35 99742.04
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2115.77 90.91 60108.20 568.43 100925.39
00:24:02.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2128.24 91.45 59775.04 796.45 101027.20
00:24:02.238 ========================================================
00:24:02.238 Total : 21577.75 927.17 59100.55 560.55 111296.05
00:24:02.238
00:24:02.238 [2024-11-20 12:38:07.594320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ebc0 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e890 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x53f410 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x540900 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53e560 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53fa70 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53eef0 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f740 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x540720 is same with the state(6) to be set
00:24:02.238 [2024-11-20 12:38:07.594632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x540ae0 is same with the state(6) to be set
00:24:02.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:02.238 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 997147
00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 997147
00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640
-- # local arg=wait 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 997147 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.176 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.176 rmmod nvme_tcp 00:24:03.436 rmmod nvme_fabrics 00:24:03.436 rmmod nvme_keyring 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 996864 ']' 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 996864 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 996864 ']' 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 996864 00:24:03.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (996864) - No such process 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 996864 is not found' 00:24:03.436 Process with pid 996864 is not found 00:24:03.436 12:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.436 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.343 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.343 00:24:05.343 real 0m10.366s 00:24:05.343 user 0m28.635s 00:24:05.343 sys 0m3.831s 00:24:05.343 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.343 12:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:05.343 ************************************ 00:24:05.343 END TEST nvmf_shutdown_tc4 00:24:05.343 ************************************ 00:24:05.343 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:05.343 00:24:05.343 real 0m42.343s 00:24:05.343 user 1m47.075s 00:24:05.343 sys 0m12.597s 00:24:05.343 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:05.602 ************************************ 00:24:05.602 END TEST nvmf_shutdown 00:24:05.602 ************************************ 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:05.602 ************************************ 00:24:05.602 START TEST nvmf_nsid 00:24:05.602 ************************************ 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:05.602 * Looking for test storage... 
00:24:05.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.602 
12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.602 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:05.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.603 --rc genhtml_branch_coverage=1 00:24:05.603 --rc genhtml_function_coverage=1 00:24:05.603 --rc genhtml_legend=1 00:24:05.603 --rc geninfo_all_blocks=1 00:24:05.603 --rc 
geninfo_unexecuted_blocks=1 00:24:05.603 00:24:05.603 ' 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:05.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.603 --rc genhtml_branch_coverage=1 00:24:05.603 --rc genhtml_function_coverage=1 00:24:05.603 --rc genhtml_legend=1 00:24:05.603 --rc geninfo_all_blocks=1 00:24:05.603 --rc geninfo_unexecuted_blocks=1 00:24:05.603 00:24:05.603 ' 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:05.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.603 --rc genhtml_branch_coverage=1 00:24:05.603 --rc genhtml_function_coverage=1 00:24:05.603 --rc genhtml_legend=1 00:24:05.603 --rc geninfo_all_blocks=1 00:24:05.603 --rc geninfo_unexecuted_blocks=1 00:24:05.603 00:24:05.603 ' 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:05.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.603 --rc genhtml_branch_coverage=1 00:24:05.603 --rc genhtml_function_coverage=1 00:24:05.603 --rc genhtml_legend=1 00:24:05.603 --rc geninfo_all_blocks=1 00:24:05.603 --rc geninfo_unexecuted_blocks=1 00:24:05.603 00:24:05.603 ' 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.603 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.863 12:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:05.863 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:24:12.437 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:24:12.437 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:24:12.437 Found net devices under 0000:1a:00.0: cvl_0_0 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:24:12.437 Found net devices under 0000:1a:00.1: cvl_0_1 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.437 12:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.437 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.438 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:12.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:24:12.438 00:24:12.438 --- 10.0.0.2 ping statistics --- 00:24:12.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.438 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:24:12.438 00:24:12.438 --- 10.0.0.1 ping statistics --- 00:24:12.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.438 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.438 12:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1001943 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1001943 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1001943 ']' 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:12.438 [2024-11-20 12:38:17.601959] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:24:12.438 [2024-11-20 12:38:17.602000] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.438 [2024-11-20 12:38:17.677302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.438 [2024-11-20 12:38:17.715168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.438 [2024-11-20 12:38:17.715202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.438 [2024-11-20 12:38:17.715208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.438 [2024-11-20 12:38:17.715214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.438 [2024-11-20 12:38:17.715218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:12.438 [2024-11-20 12:38:17.715791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1002116 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.438 
12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=bcd9545f-3f91-4214-b03c-b876d1cc9e19 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=26e63baa-d185-4017-9a85-16b771c9472b 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d50f4e5c-b3a5-428e-b2d5-44f5d4831561 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:12.438 null0 00:24:12.438 null1 00:24:12.438 [2024-11-20 12:38:17.893251] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:24:12.438 [2024-11-20 12:38:17.893291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002116 ] 00:24:12.438 null2 00:24:12.438 [2024-11-20 12:38:17.901727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.438 [2024-11-20 12:38:17.925923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.438 [2024-11-20 12:38:17.962599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1002116 /var/tmp/tgt2.sock 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1002116 ']' 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:12.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.438 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:12.438 [2024-11-20 12:38:18.004565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.698 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.698 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:12.698 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:12.957 [2024-11-20 12:38:18.511301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.957 [2024-11-20 12:38:18.527442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:12.957 nvme0n1 nvme0n2 00:24:12.957 nvme1n1 00:24:12.957 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:12.957 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:12.957 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:14.339 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid bcd9545f-3f91-4214-b03c-b876d1cc9e19 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:15.277 
12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bcd9545f3f914214b03cb876d1cc9e19 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BCD9545F3F914214B03CB876D1CC9E19 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ BCD9545F3F914214B03CB876D1CC9E19 == \B\C\D\9\5\4\5\F\3\F\9\1\4\2\1\4\B\0\3\C\B\8\7\6\D\1\C\C\9\E\1\9 ]] 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 26e63baa-d185-4017-9a85-16b771c9472b 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=26e63baad18540179a8516b771c9472b 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 26E63BAAD18540179A8516B771C9472B 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 26E63BAAD18540179A8516B771C9472B == \2\6\E\6\3\B\A\A\D\1\8\5\4\0\1\7\9\A\8\5\1\6\B\7\7\1\C\9\4\7\2\B ]] 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d50f4e5c-b3a5-428e-b2d5-44f5d4831561 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:15.277 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d50f4e5cb3a5428eb2d544f5d4831561 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D50F4E5CB3A5428EB2D544F5D4831561 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D50F4E5CB3A5428EB2D544F5D4831561 == \D\5\0\F\4\E\5\C\B\3\A\5\4\2\8\E\B\2\D\5\4\4\F\5\D\4\8\3\1\5\6\1 ]] 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1002116 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1002116 ']' 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1002116 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1002116 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1002116' 00:24:15.537 killing process with pid 1002116 00:24:15.537 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1002116 00:24:15.537 
12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1002116 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.106 rmmod nvme_tcp 00:24:16.106 rmmod nvme_fabrics 00:24:16.106 rmmod nvme_keyring 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1001943 ']' 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1001943 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1001943 ']' 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1001943 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1001943 00:24:16.106 
12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1001943' 00:24:16.106 killing process with pid 1001943 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1001943 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1001943 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.106 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.643 12:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.643 00:24:18.643 real 0m12.745s 00:24:18.643 user 0m9.978s 00:24:18.643 sys 0m5.632s 00:24:18.643 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.643 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 ************************************ 00:24:18.643 END TEST nvmf_nsid 00:24:18.643 ************************************ 00:24:18.643 12:38:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:18.643 00:24:18.643 real 12m4.919s 00:24:18.643 user 25m58.604s 00:24:18.643 sys 3m38.372s 00:24:18.643 12:38:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.643 12:38:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 ************************************ 00:24:18.643 END TEST nvmf_target_extra 00:24:18.643 ************************************ 00:24:18.643 12:38:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:18.643 12:38:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.643 12:38:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.643 12:38:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 ************************************ 00:24:18.643 START TEST nvmf_host 00:24:18.643 ************************************ 00:24:18.643 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:18.643 * Looking for test storage... 
00:24:18.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:18.643 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:18.643 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:18.643 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:18.643 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:18.643 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.643 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.644 --rc genhtml_branch_coverage=1 00:24:18.644 --rc genhtml_function_coverage=1 00:24:18.644 --rc genhtml_legend=1 00:24:18.644 --rc geninfo_all_blocks=1 00:24:18.644 --rc geninfo_unexecuted_blocks=1 00:24:18.644 00:24:18.644 ' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.644 --rc genhtml_branch_coverage=1 00:24:18.644 --rc genhtml_function_coverage=1 00:24:18.644 --rc genhtml_legend=1 00:24:18.644 --rc 
geninfo_all_blocks=1 00:24:18.644 --rc geninfo_unexecuted_blocks=1 00:24:18.644 00:24:18.644 ' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.644 --rc genhtml_branch_coverage=1 00:24:18.644 --rc genhtml_function_coverage=1 00:24:18.644 --rc genhtml_legend=1 00:24:18.644 --rc geninfo_all_blocks=1 00:24:18.644 --rc geninfo_unexecuted_blocks=1 00:24:18.644 00:24:18.644 ' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.644 --rc genhtml_branch_coverage=1 00:24:18.644 --rc genhtml_function_coverage=1 00:24:18.644 --rc genhtml_legend=1 00:24:18.644 --rc geninfo_all_blocks=1 00:24:18.644 --rc geninfo_unexecuted_blocks=1 00:24:18.644 00:24:18.644 ' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.644 ************************************ 00:24:18.644 START TEST nvmf_multicontroller 00:24:18.644 ************************************ 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:18.644 * Looking for test storage... 
00:24:18.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:18.644 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.905 --rc genhtml_branch_coverage=1 00:24:18.905 --rc genhtml_function_coverage=1 
00:24:18.905 --rc genhtml_legend=1 00:24:18.905 --rc geninfo_all_blocks=1 00:24:18.905 --rc geninfo_unexecuted_blocks=1 00:24:18.905 00:24:18.905 ' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.905 --rc genhtml_branch_coverage=1 00:24:18.905 --rc genhtml_function_coverage=1 00:24:18.905 --rc genhtml_legend=1 00:24:18.905 --rc geninfo_all_blocks=1 00:24:18.905 --rc geninfo_unexecuted_blocks=1 00:24:18.905 00:24:18.905 ' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.905 --rc genhtml_branch_coverage=1 00:24:18.905 --rc genhtml_function_coverage=1 00:24:18.905 --rc genhtml_legend=1 00:24:18.905 --rc geninfo_all_blocks=1 00:24:18.905 --rc geninfo_unexecuted_blocks=1 00:24:18.905 00:24:18.905 ' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.905 --rc genhtml_branch_coverage=1 00:24:18.905 --rc genhtml_function_coverage=1 00:24:18.905 --rc genhtml_legend=1 00:24:18.905 --rc geninfo_all_blocks=1 00:24:18.905 --rc geninfo_unexecuted_blocks=1 00:24:18.905 00:24:18.905 ' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.905 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.906 12:38:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.906 12:38:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.477 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.477 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.477 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.477 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:24:25.478 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:24:25.478 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.478 12:38:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:24:25.478 Found net devices under 0000:1a:00.0: cvl_0_0 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:24:25.478 Found net devices under 0000:1a:00.1: cvl_0_1 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:24:25.478 00:24:25.478 --- 10.0.0.2 ping statistics --- 00:24:25.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.478 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:24:25.478 00:24:25.478 --- 10.0.0.1 ping statistics --- 00:24:25.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.478 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:25.478 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1006608 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1006608 00:24:25.479 12:38:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1006608 ']' 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.479 12:38:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.479 [2024-11-20 12:38:30.690504] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:24:25.479 [2024-11-20 12:38:30.690549] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.479 [2024-11-20 12:38:30.768657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:25.479 [2024-11-20 12:38:30.807682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.479 [2024-11-20 12:38:30.807718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:25.479 [2024-11-20 12:38:30.807725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.479 [2024-11-20 12:38:30.807731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.479 [2024-11-20 12:38:30.807736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.479 [2024-11-20 12:38:30.809219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.479 [2024-11-20 12:38:30.809332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.479 [2024-11-20 12:38:30.809333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 [2024-11-20 12:38:31.556539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 Malloc0 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 [2024-11-20 
12:38:31.619717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 [2024-11-20 12:38:31.627619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 Malloc1 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1006875 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1006875 /var/tmp/bdevperf.sock 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1006875 ']' 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.047 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.307 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.307 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:26.307 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:26.307 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.307 12:38:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.567 NVMe0n1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.567 1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:26.567 12:38:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.567 request: 00:24:26.567 { 00:24:26.567 "name": "NVMe0", 00:24:26.567 "trtype": "tcp", 00:24:26.567 "traddr": "10.0.0.2", 00:24:26.567 "adrfam": "ipv4", 00:24:26.567 "trsvcid": "4420", 00:24:26.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.567 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:26.567 "hostaddr": "10.0.0.1", 00:24:26.567 "prchk_reftag": false, 00:24:26.567 "prchk_guard": false, 00:24:26.567 "hdgst": false, 00:24:26.567 "ddgst": false, 00:24:26.567 "allow_unrecognized_csi": false, 00:24:26.567 "method": "bdev_nvme_attach_controller", 00:24:26.567 "req_id": 1 00:24:26.567 } 00:24:26.567 Got JSON-RPC error response 00:24:26.567 response: 00:24:26.567 { 00:24:26.567 "code": -114, 00:24:26.567 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:26.567 } 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:26.567 12:38:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.567 request: 00:24:26.567 { 00:24:26.567 "name": "NVMe0", 00:24:26.567 "trtype": "tcp", 00:24:26.567 "traddr": "10.0.0.2", 00:24:26.567 "adrfam": "ipv4", 00:24:26.567 "trsvcid": "4420", 00:24:26.567 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:26.567 "hostaddr": "10.0.0.1", 00:24:26.567 "prchk_reftag": false, 00:24:26.567 "prchk_guard": false, 00:24:26.567 "hdgst": false, 00:24:26.567 "ddgst": false, 00:24:26.567 "allow_unrecognized_csi": false, 00:24:26.567 "method": "bdev_nvme_attach_controller", 00:24:26.567 "req_id": 1 00:24:26.567 } 00:24:26.567 Got JSON-RPC error response 00:24:26.567 response: 00:24:26.567 { 00:24:26.567 "code": -114, 00:24:26.567 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:26.567 } 00:24:26.567 12:38:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.567 request: 00:24:26.567 { 00:24:26.567 "name": "NVMe0", 00:24:26.567 "trtype": "tcp", 00:24:26.567 "traddr": "10.0.0.2", 00:24:26.567 "adrfam": "ipv4", 00:24:26.567 "trsvcid": "4420", 00:24:26.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.567 "hostaddr": "10.0.0.1", 00:24:26.567 "prchk_reftag": false, 00:24:26.567 "prchk_guard": false, 00:24:26.567 "hdgst": false, 00:24:26.567 "ddgst": false, 00:24:26.567 "multipath": "disable", 00:24:26.567 "allow_unrecognized_csi": false, 00:24:26.567 "method": "bdev_nvme_attach_controller", 00:24:26.567 "req_id": 1 00:24:26.567 } 00:24:26.567 Got JSON-RPC error response 00:24:26.567 response: 00:24:26.567 { 00:24:26.567 "code": -114, 00:24:26.567 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:26.567 } 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:26.567 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.568 request: 00:24:26.568 { 00:24:26.568 "name": "NVMe0", 00:24:26.568 "trtype": "tcp", 00:24:26.568 "traddr": "10.0.0.2", 00:24:26.568 "adrfam": "ipv4", 00:24:26.568 "trsvcid": "4420", 00:24:26.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.568 "hostaddr": "10.0.0.1", 00:24:26.568 "prchk_reftag": false, 00:24:26.568 "prchk_guard": false, 00:24:26.568 "hdgst": false, 00:24:26.568 "ddgst": false, 00:24:26.568 "multipath": "failover", 00:24:26.568 "allow_unrecognized_csi": false, 00:24:26.568 "method": "bdev_nvme_attach_controller", 00:24:26.568 "req_id": 1 00:24:26.568 } 00:24:26.568 Got JSON-RPC error response 00:24:26.568 response: 00:24:26.568 { 00:24:26.568 "code": -114, 00:24:26.568 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:26.568 } 00:24:26.568 12:38:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.568 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.827 NVMe0n1 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.827 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:26.827 12:38:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.203 { 00:24:28.203 "results": [ 00:24:28.203 { 00:24:28.203 "job": "NVMe0n1", 00:24:28.203 "core_mask": "0x1", 00:24:28.203 "workload": "write", 00:24:28.203 "status": "finished", 00:24:28.203 "queue_depth": 128, 00:24:28.203 "io_size": 4096, 00:24:28.203 "runtime": 1.0045, 00:24:28.203 "iops": 27393.728222996517, 00:24:28.203 "mibps": 107.00675087108014, 00:24:28.203 "io_failed": 0, 00:24:28.203 "io_timeout": 0, 00:24:28.203 "avg_latency_us": 4663.498041475187, 00:24:28.203 "min_latency_us": 2770.3854545454546, 00:24:28.203 "max_latency_us": 11021.963636363636 00:24:28.203 } 00:24:28.203 ], 00:24:28.203 "core_count": 1 00:24:28.203 } 00:24:28.203 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:28.203 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.203 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:28.203 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.203 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:28.203 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1006875 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1006875 ']' 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1006875 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1006875 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1006875' 00:24:28.204 killing process with pid 1006875 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1006875 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1006875 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:28.204 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:28.204 [2024-11-20 12:38:31.731639] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:24:28.204 [2024-11-20 12:38:31.731691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006875 ] 00:24:28.204 [2024-11-20 12:38:31.805920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.204 [2024-11-20 12:38:31.846355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.204 [2024-11-20 12:38:32.493079] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 1f506282-e748-487d-9699-6085229e0a55 already exists 00:24:28.204 [2024-11-20 12:38:32.493105] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:1f506282-e748-487d-9699-6085229e0a55 alias for bdev NVMe1n1 00:24:28.204 [2024-11-20 12:38:32.493112] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:28.204 Running I/O for 1 seconds... 00:24:28.204 27325.00 IOPS, 106.74 MiB/s 00:24:28.204 Latency(us) 00:24:28.204 [2024-11-20T11:38:33.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.204 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:28.204 NVMe0n1 : 1.00 27393.73 107.01 0.00 0.00 4663.50 2770.39 11021.96 00:24:28.204 [2024-11-20T11:38:33.968Z] =================================================================================================================== 00:24:28.204 [2024-11-20T11:38:33.968Z] Total : 27393.73 107.01 0.00 0.00 4663.50 2770.39 11021.96 00:24:28.204 Received shutdown signal, test time was about 1.000000 seconds 00:24:28.204 00:24:28.204 Latency(us) 00:24:28.204 [2024-11-20T11:38:33.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.204 [2024-11-20T11:38:33.968Z] =================================================================================================================== 00:24:28.204 [2024-11-20T11:38:33.968Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:24:28.204 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.204 rmmod nvme_tcp 00:24:28.204 rmmod nvme_fabrics 00:24:28.204 rmmod nvme_keyring 00:24:28.204 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1006608 ']' 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1006608 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1006608 ']' 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1006608 
00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.463 12:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1006608 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1006608' 00:24:28.463 killing process with pid 1006608 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1006608 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1006608 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:28.463 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.722 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.722 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.722 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:28.722 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.722 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.722 12:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.761 12:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.761 00:24:30.761 real 0m12.013s 00:24:30.762 user 0m14.446s 00:24:30.762 sys 0m5.320s 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:30.762 ************************************ 00:24:30.762 END TEST nvmf_multicontroller 00:24:30.762 ************************************ 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.762 ************************************ 00:24:30.762 START TEST nvmf_aer 00:24:30.762 ************************************ 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:30.762 * Looking for test storage... 
00:24:30.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:30.762 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.080 --rc genhtml_branch_coverage=1 00:24:31.080 --rc genhtml_function_coverage=1 00:24:31.080 --rc genhtml_legend=1 00:24:31.080 --rc geninfo_all_blocks=1 00:24:31.080 --rc geninfo_unexecuted_blocks=1 00:24:31.080 00:24:31.080 ' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.080 --rc 
genhtml_branch_coverage=1 00:24:31.080 --rc genhtml_function_coverage=1 00:24:31.080 --rc genhtml_legend=1 00:24:31.080 --rc geninfo_all_blocks=1 00:24:31.080 --rc geninfo_unexecuted_blocks=1 00:24:31.080 00:24:31.080 ' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.080 --rc genhtml_branch_coverage=1 00:24:31.080 --rc genhtml_function_coverage=1 00:24:31.080 --rc genhtml_legend=1 00:24:31.080 --rc geninfo_all_blocks=1 00:24:31.080 --rc geninfo_unexecuted_blocks=1 00:24:31.080 00:24:31.080 ' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.080 --rc genhtml_branch_coverage=1 00:24:31.080 --rc genhtml_function_coverage=1 00:24:31.080 --rc genhtml_legend=1 00:24:31.080 --rc geninfo_all_blocks=1 00:24:31.080 --rc geninfo_unexecuted_blocks=1 00:24:31.080 00:24:31.080 ' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.080 12:38:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.080 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.081 12:38:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:24:37.658 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:24:37.658 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.658 12:38:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:24:37.658 Found net devices under 0000:1a:00.0: cvl_0_0 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:24:37.658 Found net devices under 0000:1a:00.1: cvl_0_1 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.658 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:37.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:24:37.659 00:24:37.659 --- 10.0.0.2 ping statistics --- 00:24:37.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.659 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:37.659 00:24:37.659 --- 10.0.0.1 ping statistics --- 00:24:37.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.659 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1010944 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1010944 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1010944 ']' 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.659 12:38:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:37.659 [2024-11-20 12:38:42.811987] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:24:37.659 [2024-11-20 12:38:42.812033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.659 [2024-11-20 12:38:42.888403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.659 [2024-11-20 12:38:42.928356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:37.659 [2024-11-20 12:38:42.928391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.659 [2024-11-20 12:38:42.928397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.659 [2024-11-20 12:38:42.928402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.659 [2024-11-20 12:38:42.928407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.659 [2024-11-20 12:38:42.929960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.659 [2024-11-20 12:38:42.930079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.659 [2024-11-20 12:38:42.930231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.659 [2024-11-20 12:38:42.930232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.918 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:37.918 [2024-11-20 12:38:43.678192] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.178 Malloc0 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.178 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.179 [2024-11-20 12:38:43.741645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.179 [ 00:24:38.179 { 00:24:38.179 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.179 "subtype": "Discovery", 00:24:38.179 "listen_addresses": [], 00:24:38.179 "allow_any_host": true, 00:24:38.179 "hosts": [] 00:24:38.179 }, 00:24:38.179 { 00:24:38.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.179 "subtype": "NVMe", 00:24:38.179 "listen_addresses": [ 00:24:38.179 { 00:24:38.179 "trtype": "TCP", 00:24:38.179 "adrfam": "IPv4", 00:24:38.179 "traddr": "10.0.0.2", 00:24:38.179 "trsvcid": "4420" 00:24:38.179 } 00:24:38.179 ], 00:24:38.179 "allow_any_host": true, 00:24:38.179 "hosts": [], 00:24:38.179 "serial_number": "SPDK00000000000001", 00:24:38.179 "model_number": "SPDK bdev Controller", 00:24:38.179 "max_namespaces": 2, 00:24:38.179 "min_cntlid": 1, 00:24:38.179 "max_cntlid": 65519, 00:24:38.179 "namespaces": [ 00:24:38.179 { 00:24:38.179 "nsid": 1, 00:24:38.179 "bdev_name": "Malloc0", 00:24:38.179 "name": "Malloc0", 00:24:38.179 "nguid": "9877757D69B141169FBD46FC8B05F6A9", 00:24:38.179 "uuid": "9877757d-69b1-4116-9fbd-46fc8b05f6a9" 00:24:38.179 } 00:24:38.179 ] 00:24:38.179 } 00:24:38.179 ] 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1011153 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:38.179 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:38.439 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.439 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:38.439 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:38.439 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:38.439 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.439 12:38:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.439 Malloc1 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.439 Asynchronous Event Request test 00:24:38.439 Attaching to 10.0.0.2 00:24:38.439 Attached to 10.0.0.2 00:24:38.439 Registering asynchronous event callbacks... 00:24:38.439 Starting namespace attribute notice tests for all controllers... 00:24:38.439 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:38.439 aer_cb - Changed Namespace 00:24:38.439 Cleaning up... 
00:24:38.439 [ 00:24:38.439 { 00:24:38.439 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.439 "subtype": "Discovery", 00:24:38.439 "listen_addresses": [], 00:24:38.439 "allow_any_host": true, 00:24:38.439 "hosts": [] 00:24:38.439 }, 00:24:38.439 { 00:24:38.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.439 "subtype": "NVMe", 00:24:38.439 "listen_addresses": [ 00:24:38.439 { 00:24:38.439 "trtype": "TCP", 00:24:38.439 "adrfam": "IPv4", 00:24:38.439 "traddr": "10.0.0.2", 00:24:38.439 "trsvcid": "4420" 00:24:38.439 } 00:24:38.439 ], 00:24:38.439 "allow_any_host": true, 00:24:38.439 "hosts": [], 00:24:38.439 "serial_number": "SPDK00000000000001", 00:24:38.439 "model_number": "SPDK bdev Controller", 00:24:38.439 "max_namespaces": 2, 00:24:38.439 "min_cntlid": 1, 00:24:38.439 "max_cntlid": 65519, 00:24:38.439 "namespaces": [ 00:24:38.439 { 00:24:38.439 "nsid": 1, 00:24:38.439 "bdev_name": "Malloc0", 00:24:38.439 "name": "Malloc0", 00:24:38.439 "nguid": "9877757D69B141169FBD46FC8B05F6A9", 00:24:38.439 "uuid": "9877757d-69b1-4116-9fbd-46fc8b05f6a9" 00:24:38.439 }, 00:24:38.439 { 00:24:38.439 "nsid": 2, 00:24:38.439 "bdev_name": "Malloc1", 00:24:38.439 "name": "Malloc1", 00:24:38.439 "nguid": "77536758997F4D7A8B8850F90BB01D2A", 00:24:38.439 "uuid": "77536758-997f-4d7a-8b88-50f90bb01d2a" 00:24:38.439 } 00:24:38.439 ] 00:24:38.439 } 00:24:38.439 ] 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1011153 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.439 12:38:44 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.439 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.440 rmmod nvme_tcp 00:24:38.440 rmmod nvme_fabrics 00:24:38.440 rmmod nvme_keyring 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1010944 ']' 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1010944 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1010944 ']' 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1010944 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.440 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010944 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010944' 00:24:38.699 killing process with pid 1010944 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1010944 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1010944 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.699 12:38:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.236 00:24:41.236 real 0m10.082s 00:24:41.236 user 0m7.674s 00:24:41.236 sys 0m5.144s 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.236 ************************************ 00:24:41.236 END TEST nvmf_aer 00:24:41.236 ************************************ 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.236 ************************************ 00:24:41.236 START TEST nvmf_async_init 00:24:41.236 ************************************ 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:41.236 * Looking for test storage... 
00:24:41.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:41.236 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.237 12:38:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.237 --rc genhtml_branch_coverage=1 00:24:41.237 --rc genhtml_function_coverage=1 00:24:41.237 --rc genhtml_legend=1 00:24:41.237 --rc geninfo_all_blocks=1 00:24:41.237 --rc geninfo_unexecuted_blocks=1 00:24:41.237 
00:24:41.237 ' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.237 --rc genhtml_branch_coverage=1 00:24:41.237 --rc genhtml_function_coverage=1 00:24:41.237 --rc genhtml_legend=1 00:24:41.237 --rc geninfo_all_blocks=1 00:24:41.237 --rc geninfo_unexecuted_blocks=1 00:24:41.237 00:24:41.237 ' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.237 --rc genhtml_branch_coverage=1 00:24:41.237 --rc genhtml_function_coverage=1 00:24:41.237 --rc genhtml_legend=1 00:24:41.237 --rc geninfo_all_blocks=1 00:24:41.237 --rc geninfo_unexecuted_blocks=1 00:24:41.237 00:24:41.237 ' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.237 --rc genhtml_branch_coverage=1 00:24:41.237 --rc genhtml_function_coverage=1 00:24:41.237 --rc genhtml_legend=1 00:24:41.237 --rc geninfo_all_blocks=1 00:24:41.237 --rc geninfo_unexecuted_blocks=1 00:24:41.237 00:24:41.237 ' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=24373d8e377c4a24946183f9cde9c31f 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.237 12:38:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.810 12:38:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:24:47.810 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:24:47.810 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.810 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:24:47.811 Found net devices under 0000:1a:00.0: cvl_0_0 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:24:47.811 Found net devices under 0000:1a:00.1: cvl_0_1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:47.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:24:47.811 00:24:47.811 --- 10.0.0.2 ping statistics --- 00:24:47.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.811 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:24:47.811 00:24:47.811 --- 10.0.0.1 ping statistics --- 00:24:47.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.811 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1014927 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1014927 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1014927 ']' 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.811 12:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.811 [2024-11-20 12:38:52.960835] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:24:47.811 [2024-11-20 12:38:52.960879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.811 [2024-11-20 12:38:53.038277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.811 [2024-11-20 12:38:53.076684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.811 [2024-11-20 12:38:53.076716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.811 [2024-11-20 12:38:53.076722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.811 [2024-11-20 12:38:53.076727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.811 [2024-11-20 12:38:53.076732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.811 [2024-11-20 12:38:53.077328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.070 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.071 [2024-11-20 12:38:53.815421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.071 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.071 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:48.071 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.071 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.329 null0 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 24373d8e377c4a24946183f9cde9c31f 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.329 [2024-11-20 12:38:53.867708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.329 12:38:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.589 nvme0n1 00:24:48.589 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.589 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:48.589 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.589 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.589 [ 00:24:48.589 { 00:24:48.589 "name": "nvme0n1", 00:24:48.589 "aliases": [ 00:24:48.589 "24373d8e-377c-4a24-9461-83f9cde9c31f" 00:24:48.589 ], 00:24:48.589 "product_name": "NVMe disk", 00:24:48.589 "block_size": 512, 00:24:48.589 "num_blocks": 2097152, 00:24:48.589 "uuid": "24373d8e-377c-4a24-9461-83f9cde9c31f", 00:24:48.589 "numa_id": 0, 00:24:48.589 "assigned_rate_limits": { 00:24:48.589 "rw_ios_per_sec": 0, 00:24:48.589 "rw_mbytes_per_sec": 0, 00:24:48.589 "r_mbytes_per_sec": 0, 00:24:48.589 "w_mbytes_per_sec": 0 00:24:48.589 }, 00:24:48.589 "claimed": false, 00:24:48.589 "zoned": false, 00:24:48.589 "supported_io_types": { 00:24:48.589 "read": true, 00:24:48.589 "write": true, 00:24:48.589 "unmap": false, 00:24:48.589 "flush": true, 00:24:48.589 "reset": true, 00:24:48.589 "nvme_admin": true, 00:24:48.589 "nvme_io": true, 00:24:48.589 "nvme_io_md": false, 00:24:48.589 "write_zeroes": true, 00:24:48.589 "zcopy": false, 00:24:48.589 "get_zone_info": false, 00:24:48.589 "zone_management": false, 00:24:48.589 "zone_append": false, 00:24:48.589 "compare": true, 00:24:48.589 "compare_and_write": true, 00:24:48.589 "abort": true, 00:24:48.589 "seek_hole": false, 00:24:48.589 "seek_data": false, 00:24:48.589 "copy": true, 00:24:48.589 
"nvme_iov_md": false 00:24:48.589 }, 00:24:48.589 "memory_domains": [ 00:24:48.589 { 00:24:48.589 "dma_device_id": "system", 00:24:48.589 "dma_device_type": 1 00:24:48.589 } 00:24:48.589 ], 00:24:48.589 "driver_specific": { 00:24:48.589 "nvme": [ 00:24:48.589 { 00:24:48.589 "trid": { 00:24:48.589 "trtype": "TCP", 00:24:48.589 "adrfam": "IPv4", 00:24:48.589 "traddr": "10.0.0.2", 00:24:48.589 "trsvcid": "4420", 00:24:48.589 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:48.589 }, 00:24:48.589 "ctrlr_data": { 00:24:48.589 "cntlid": 1, 00:24:48.589 "vendor_id": "0x8086", 00:24:48.589 "model_number": "SPDK bdev Controller", 00:24:48.590 "serial_number": "00000000000000000000", 00:24:48.590 "firmware_revision": "25.01", 00:24:48.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.590 "oacs": { 00:24:48.590 "security": 0, 00:24:48.590 "format": 0, 00:24:48.590 "firmware": 0, 00:24:48.590 "ns_manage": 0 00:24:48.590 }, 00:24:48.590 "multi_ctrlr": true, 00:24:48.590 "ana_reporting": false 00:24:48.590 }, 00:24:48.590 "vs": { 00:24:48.590 "nvme_version": "1.3" 00:24:48.590 }, 00:24:48.590 "ns_data": { 00:24:48.590 "id": 1, 00:24:48.590 "can_share": true 00:24:48.590 } 00:24:48.590 } 00:24:48.590 ], 00:24:48.590 "mp_policy": "active_passive" 00:24:48.590 } 00:24:48.590 } 00:24:48.590 ] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.590 [2024-11-20 12:38:54.132216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:48.590 [2024-11-20 12:38:54.132266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xcf2660 (9): Bad file descriptor 00:24:48.590 [2024-11-20 12:38:54.264487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.590 [ 00:24:48.590 { 00:24:48.590 "name": "nvme0n1", 00:24:48.590 "aliases": [ 00:24:48.590 "24373d8e-377c-4a24-9461-83f9cde9c31f" 00:24:48.590 ], 00:24:48.590 "product_name": "NVMe disk", 00:24:48.590 "block_size": 512, 00:24:48.590 "num_blocks": 2097152, 00:24:48.590 "uuid": "24373d8e-377c-4a24-9461-83f9cde9c31f", 00:24:48.590 "numa_id": 0, 00:24:48.590 "assigned_rate_limits": { 00:24:48.590 "rw_ios_per_sec": 0, 00:24:48.590 "rw_mbytes_per_sec": 0, 00:24:48.590 "r_mbytes_per_sec": 0, 00:24:48.590 "w_mbytes_per_sec": 0 00:24:48.590 }, 00:24:48.590 "claimed": false, 00:24:48.590 "zoned": false, 00:24:48.590 "supported_io_types": { 00:24:48.590 "read": true, 00:24:48.590 "write": true, 00:24:48.590 "unmap": false, 00:24:48.590 "flush": true, 00:24:48.590 "reset": true, 00:24:48.590 "nvme_admin": true, 00:24:48.590 "nvme_io": true, 00:24:48.590 "nvme_io_md": false, 00:24:48.590 "write_zeroes": true, 00:24:48.590 "zcopy": false, 00:24:48.590 "get_zone_info": false, 00:24:48.590 "zone_management": false, 00:24:48.590 "zone_append": false, 00:24:48.590 "compare": true, 00:24:48.590 "compare_and_write": true, 00:24:48.590 "abort": true, 00:24:48.590 "seek_hole": false, 00:24:48.590 "seek_data": false, 00:24:48.590 "copy": true, 00:24:48.590 "nvme_iov_md": false 00:24:48.590 }, 00:24:48.590 "memory_domains": [ 
00:24:48.590 { 00:24:48.590 "dma_device_id": "system", 00:24:48.590 "dma_device_type": 1 00:24:48.590 } 00:24:48.590 ], 00:24:48.590 "driver_specific": { 00:24:48.590 "nvme": [ 00:24:48.590 { 00:24:48.590 "trid": { 00:24:48.590 "trtype": "TCP", 00:24:48.590 "adrfam": "IPv4", 00:24:48.590 "traddr": "10.0.0.2", 00:24:48.590 "trsvcid": "4420", 00:24:48.590 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:48.590 }, 00:24:48.590 "ctrlr_data": { 00:24:48.590 "cntlid": 2, 00:24:48.590 "vendor_id": "0x8086", 00:24:48.590 "model_number": "SPDK bdev Controller", 00:24:48.590 "serial_number": "00000000000000000000", 00:24:48.590 "firmware_revision": "25.01", 00:24:48.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.590 "oacs": { 00:24:48.590 "security": 0, 00:24:48.590 "format": 0, 00:24:48.590 "firmware": 0, 00:24:48.590 "ns_manage": 0 00:24:48.590 }, 00:24:48.590 "multi_ctrlr": true, 00:24:48.590 "ana_reporting": false 00:24:48.590 }, 00:24:48.590 "vs": { 00:24:48.590 "nvme_version": "1.3" 00:24:48.590 }, 00:24:48.590 "ns_data": { 00:24:48.590 "id": 1, 00:24:48.590 "can_share": true 00:24:48.590 } 00:24:48.590 } 00:24:48.590 ], 00:24:48.590 "mp_policy": "active_passive" 00:24:48.590 } 00:24:48.590 } 00:24:48.590 ] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.I8OAvCKm1W 
00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.I8OAvCKm1W 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.I8OAvCKm1W 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.590 [2024-11-20 12:38:54.336818] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:48.590 [2024-11-20 12:38:54.336910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.590 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:48.591 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.591 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.591 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.591 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.850 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.850 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.850 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.850 [2024-11-20 12:38:54.356885] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.850 nvme0n1 00:24:48.850 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.850 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:48.850 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.850 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.850 [ 00:24:48.850 { 00:24:48.850 "name": "nvme0n1", 00:24:48.850 "aliases": [ 00:24:48.850 "24373d8e-377c-4a24-9461-83f9cde9c31f" 00:24:48.850 ], 00:24:48.850 "product_name": "NVMe disk", 00:24:48.850 "block_size": 512, 00:24:48.850 "num_blocks": 2097152, 00:24:48.850 "uuid": "24373d8e-377c-4a24-9461-83f9cde9c31f", 00:24:48.850 "numa_id": 0, 00:24:48.850 "assigned_rate_limits": { 00:24:48.850 "rw_ios_per_sec": 0, 00:24:48.850 
"rw_mbytes_per_sec": 0, 00:24:48.850 "r_mbytes_per_sec": 0, 00:24:48.850 "w_mbytes_per_sec": 0 00:24:48.850 }, 00:24:48.850 "claimed": false, 00:24:48.850 "zoned": false, 00:24:48.850 "supported_io_types": { 00:24:48.850 "read": true, 00:24:48.851 "write": true, 00:24:48.851 "unmap": false, 00:24:48.851 "flush": true, 00:24:48.851 "reset": true, 00:24:48.851 "nvme_admin": true, 00:24:48.851 "nvme_io": true, 00:24:48.851 "nvme_io_md": false, 00:24:48.851 "write_zeroes": true, 00:24:48.851 "zcopy": false, 00:24:48.851 "get_zone_info": false, 00:24:48.851 "zone_management": false, 00:24:48.851 "zone_append": false, 00:24:48.851 "compare": true, 00:24:48.851 "compare_and_write": true, 00:24:48.851 "abort": true, 00:24:48.851 "seek_hole": false, 00:24:48.851 "seek_data": false, 00:24:48.851 "copy": true, 00:24:48.851 "nvme_iov_md": false 00:24:48.851 }, 00:24:48.851 "memory_domains": [ 00:24:48.851 { 00:24:48.851 "dma_device_id": "system", 00:24:48.851 "dma_device_type": 1 00:24:48.851 } 00:24:48.851 ], 00:24:48.851 "driver_specific": { 00:24:48.851 "nvme": [ 00:24:48.851 { 00:24:48.851 "trid": { 00:24:48.851 "trtype": "TCP", 00:24:48.851 "adrfam": "IPv4", 00:24:48.851 "traddr": "10.0.0.2", 00:24:48.851 "trsvcid": "4421", 00:24:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:48.851 }, 00:24:48.851 "ctrlr_data": { 00:24:48.851 "cntlid": 3, 00:24:48.851 "vendor_id": "0x8086", 00:24:48.851 "model_number": "SPDK bdev Controller", 00:24:48.851 "serial_number": "00000000000000000000", 00:24:48.851 "firmware_revision": "25.01", 00:24:48.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.851 "oacs": { 00:24:48.851 "security": 0, 00:24:48.851 "format": 0, 00:24:48.851 "firmware": 0, 00:24:48.851 "ns_manage": 0 00:24:48.851 }, 00:24:48.851 "multi_ctrlr": true, 00:24:48.851 "ana_reporting": false 00:24:48.851 }, 00:24:48.851 "vs": { 00:24:48.851 "nvme_version": "1.3" 00:24:48.851 }, 00:24:48.851 "ns_data": { 00:24:48.851 "id": 1, 00:24:48.851 "can_share": true 00:24:48.851 } 
00:24:48.851 } 00:24:48.851 ], 00:24:48.851 "mp_policy": "active_passive" 00:24:48.851 } 00:24:48.851 } 00:24:48.851 ] 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.I8OAvCKm1W 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.851 rmmod nvme_tcp 00:24:48.851 rmmod nvme_fabrics 00:24:48.851 rmmod nvme_keyring 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:48.851 12:38:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1014927 ']' 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1014927 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1014927 ']' 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1014927 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014927 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014927' 00:24:48.851 killing process with pid 1014927 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1014927 00:24:48.851 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1014927 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.111 
12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.111 12:38:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.050 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.050 00:24:51.050 real 0m10.261s 00:24:51.050 user 0m3.811s 00:24:51.050 sys 0m5.025s 00:24:51.050 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.050 12:38:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.050 ************************************ 00:24:51.050 END TEST nvmf_async_init 00:24:51.050 ************************************ 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.310 ************************************ 00:24:51.310 START TEST dma 00:24:51.310 ************************************ 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:51.310 * Looking for test storage... 00:24:51.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.310 12:38:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.310 --rc genhtml_branch_coverage=1 00:24:51.310 --rc genhtml_function_coverage=1 00:24:51.310 --rc genhtml_legend=1 00:24:51.310 --rc geninfo_all_blocks=1 00:24:51.310 --rc geninfo_unexecuted_blocks=1 00:24:51.310 00:24:51.310 ' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.310 --rc genhtml_branch_coverage=1 00:24:51.310 --rc genhtml_function_coverage=1 
00:24:51.310 --rc genhtml_legend=1 00:24:51.310 --rc geninfo_all_blocks=1 00:24:51.310 --rc geninfo_unexecuted_blocks=1 00:24:51.310 00:24:51.310 ' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.310 --rc genhtml_branch_coverage=1 00:24:51.310 --rc genhtml_function_coverage=1 00:24:51.310 --rc genhtml_legend=1 00:24:51.310 --rc geninfo_all_blocks=1 00:24:51.310 --rc geninfo_unexecuted_blocks=1 00:24:51.310 00:24:51.310 ' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.310 --rc genhtml_branch_coverage=1 00:24:51.310 --rc genhtml_function_coverage=1 00:24:51.310 --rc genhtml_legend=1 00:24:51.310 --rc geninfo_all_blocks=1 00:24:51.310 --rc geninfo_unexecuted_blocks=1 00:24:51.310 00:24:51.310 ' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.310 12:38:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:51.311 
12:38:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.311 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:51.570 00:24:51.570 real 0m0.206s 00:24:51.570 user 0m0.132s 00:24:51.570 sys 0m0.089s 00:24:51.570 12:38:57 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:51.570 ************************************ 00:24:51.570 END TEST dma 00:24:51.570 ************************************ 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.570 ************************************ 00:24:51.570 START TEST nvmf_identify 00:24:51.570 ************************************ 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:51.570 * Looking for test storage... 
00:24:51.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.570 --rc genhtml_branch_coverage=1 00:24:51.570 --rc genhtml_function_coverage=1 00:24:51.570 --rc genhtml_legend=1 00:24:51.570 --rc geninfo_all_blocks=1 00:24:51.570 --rc geninfo_unexecuted_blocks=1 00:24:51.570 00:24:51.570 ' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:51.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.570 --rc genhtml_branch_coverage=1 00:24:51.570 --rc genhtml_function_coverage=1 00:24:51.570 --rc genhtml_legend=1 00:24:51.570 --rc geninfo_all_blocks=1 00:24:51.570 --rc geninfo_unexecuted_blocks=1 00:24:51.570 00:24:51.570 ' 00:24:51.570 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.570 --rc genhtml_branch_coverage=1 00:24:51.570 --rc genhtml_function_coverage=1 00:24:51.570 --rc genhtml_legend=1 00:24:51.570 --rc geninfo_all_blocks=1 00:24:51.570 --rc geninfo_unexecuted_blocks=1 00:24:51.571 00:24:51.571 ' 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.571 --rc genhtml_branch_coverage=1 00:24:51.571 --rc genhtml_function_coverage=1 00:24:51.571 --rc genhtml_legend=1 00:24:51.571 --rc geninfo_all_blocks=1 00:24:51.571 --rc geninfo_unexecuted_blocks=1 00:24:51.571 00:24:51.571 ' 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.571 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.832 12:38:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.402 12:39:03 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:24:58.402 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.402 
12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:24:58.402 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.402 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:24:58.403 Found net devices under 0000:1a:00.0: cvl_0_0 00:24:58.403 12:39:03 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:24:58.403 Found net devices under 0000:1a:00.1: cvl_0_1 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
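The `Found net devices under 0000:1a:00.0: cvl_0_0` lines above come from a sysfs walk: the trace shows `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`, i.e. every net device bound to a PCI function appears as a directory under that function's `net/` node. A minimal sketch of that lookup (the `SYSFS` root variable and the `list_net_devs` helper are illustrative, not part of nvmf/common.sh):

```shell
#!/usr/bin/env sh
# Sketch of the sysfs lookup behind gather_supported_nvmf_pci_devs:
# net devices bound to a PCI function show up under
# $SYSFS/bus/pci/devices/<BDF>/net/. SYSFS is parameterized so the
# sketch can run against a fake tree; the BDF below mirrors the trace.
SYSFS=${SYSFS:-/sys}

list_net_devs() {  # $1 = PCI BDF; prints one netdev name per line
    for d in "$SYSFS/bus/pci/devices/$1/net/"*; do
        # unmatched glob stays literal, so guard with -e
        [ -e "$d" ] && echo "${d##*/}"
    done
}
```

On the test host in the log, `list_net_devs 0000:1a:00.0` would print `cvl_0_0`, matching the trace.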
00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:24:58.403 00:24:58.403 --- 10.0.0.2 ping statistics --- 00:24:58.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.403 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:24:58.403 00:24:58.403 --- 10.0.0.1 ping statistics --- 00:24:58.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.403 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1019114 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1019114 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1019114 ']' 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
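The `nvmf_tcp_init` steps above split the two NIC ports between namespaces: one port moves into a private netns for the target (hence the `ip netns exec cvl_0_0_ns_spdk` prefix on the `nvmf_tgt` launch), while the other stays in the root namespace for the initiator, so both ends share one box but traffic crosses real hardware. A dry-run sketch of that sequence, with interface names and addresses copied from the trace (`DRY_RUN` and the `run` helper are illustrative; the real commands need root):

```shell
#!/usr/bin/env sh
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh. Names and
# addresses mirror the trace above; with DRY_RUN=1 (default) the
# commands are only printed, since the real ones require root.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target port -> namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the initiator interface
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # reachability check, as in the log
```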
00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.403 12:39:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.403 [2024-11-20 12:39:03.545867] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:24:58.403 [2024-11-20 12:39:03.545907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.403 [2024-11-20 12:39:03.621392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.403 [2024-11-20 12:39:03.661658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.403 [2024-11-20 12:39:03.661693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.403 [2024-11-20 12:39:03.661699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.403 [2024-11-20 12:39:03.661706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.403 [2024-11-20 12:39:03.661710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.403 [2024-11-20 12:39:03.663321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.403 [2024-11-20 12:39:03.663458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.403 [2024-11-20 12:39:03.663501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.403 [2024-11-20 12:39:03.663502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.663 [2024-11-20 12:39:04.367484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.663 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.923 Malloc0 00:24:58.923 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.923 12:39:04 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.923 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.923 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.923 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.923 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.924 [2024-11-20 12:39:04.467873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.924 12:39:04 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.924 [ 00:24:58.924 { 00:24:58.924 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:58.924 "subtype": "Discovery", 00:24:58.924 "listen_addresses": [ 00:24:58.924 { 00:24:58.924 "trtype": "TCP", 00:24:58.924 "adrfam": "IPv4", 00:24:58.924 "traddr": "10.0.0.2", 00:24:58.924 "trsvcid": "4420" 00:24:58.924 } 00:24:58.924 ], 00:24:58.924 "allow_any_host": true, 00:24:58.924 "hosts": [] 00:24:58.924 }, 00:24:58.924 { 00:24:58.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.924 "subtype": "NVMe", 00:24:58.924 "listen_addresses": [ 00:24:58.924 { 00:24:58.924 "trtype": "TCP", 00:24:58.924 "adrfam": "IPv4", 00:24:58.924 "traddr": "10.0.0.2", 00:24:58.924 "trsvcid": "4420" 00:24:58.924 } 00:24:58.924 ], 00:24:58.924 "allow_any_host": true, 00:24:58.924 "hosts": [], 00:24:58.924 "serial_number": "SPDK00000000000001", 00:24:58.924 "model_number": "SPDK bdev Controller", 00:24:58.924 "max_namespaces": 32, 00:24:58.924 "min_cntlid": 1, 00:24:58.924 "max_cntlid": 65519, 00:24:58.924 "namespaces": [ 00:24:58.924 { 00:24:58.924 "nsid": 1, 00:24:58.924 "bdev_name": "Malloc0", 00:24:58.924 "name": "Malloc0", 00:24:58.924 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:58.924 "eui64": "ABCDEF0123456789", 00:24:58.924 "uuid": "7e5f8246-3ddd-47ec-b143-3b5a3f7491dd" 00:24:58.924 } 00:24:58.924 ] 00:24:58.924 } 00:24:58.924 ] 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.924 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:58.924 [2024-11-20 12:39:04.521809] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:24:58.924 [2024-11-20 12:39:04.521858] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019257 ] 00:24:58.924 [2024-11-20 12:39:04.560705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:58.924 [2024-11-20 12:39:04.560747] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:58.924 [2024-11-20 12:39:04.560751] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:58.924 [2024-11-20 12:39:04.560762] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:58.924 [2024-11-20 12:39:04.560770] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:58.924 [2024-11-20 12:39:04.561421] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:58.924 [2024-11-20 12:39:04.561455] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18c3690 0 00:24:58.924 [2024-11-20 12:39:04.571422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:58.924 [2024-11-20 12:39:04.571434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:58.924 [2024-11-20 12:39:04.571439] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:58.924 [2024-11-20 12:39:04.571441] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:58.924 [2024-11-20 12:39:04.571469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.571474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.571477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.924 [2024-11-20 12:39:04.571488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:58.924 [2024-11-20 12:39:04.571504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.924 [2024-11-20 12:39:04.578422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.924 [2024-11-20 12:39:04.578430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.924 [2024-11-20 12:39:04.578433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.924 [2024-11-20 12:39:04.578445] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:58.924 [2024-11-20 12:39:04.578451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:58.924 [2024-11-20 12:39:04.578455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:58.924 [2024-11-20 12:39:04.578467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 
00:24:58.924 [2024-11-20 12:39:04.578483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.924 [2024-11-20 12:39:04.578495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.924 [2024-11-20 12:39:04.578646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.924 [2024-11-20 12:39:04.578651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.924 [2024-11-20 12:39:04.578654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.924 [2024-11-20 12:39:04.578661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:58.924 [2024-11-20 12:39:04.578667] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:58.924 [2024-11-20 12:39:04.578673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.924 [2024-11-20 12:39:04.578684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.924 [2024-11-20 12:39:04.578693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.924 [2024-11-20 12:39:04.578750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.924 [2024-11-20 12:39:04.578755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
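Stepping back from the controller-init property reads being traced here: the target-side setup earlier in the log (host/identify.sh lines 24-35) is a short JSON-RPC sequence — transport, malloc bdev, subsystem, namespace, listeners — whose result is the `nvmf_get_subsystems` output shown above. A dry-run sketch of that sequence with parameters copied from the trace (the `rpc` helper and the `./scripts/rpc.py` path are illustrative):

```shell
#!/usr/bin/env sh
# Dry-run sketch of the rpc_cmd sequence from host/identify.sh;
# parameters mirror the trace above. With DRY_RUN=1 (default) the
# commands are only printed, since the real ones need a running nvmf_tgt.
DRY_RUN=${DRY_RUN:-1}
rpc() { if [ "$DRY_RUN" = 1 ]; then echo "+ rpc.py $*"; else ./scripts/rpc.py "$@"; fi; }

rpc nvmf_create_transport -t tcp -o -u 8192          # flags as in the trace
rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listeners are up, `spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 …'` connects against this configuration, which is exactly the connect/property-get exchange the surrounding trace records.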
00:24:58.924 [2024-11-20 12:39:04.578757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.924 [2024-11-20 12:39:04.578765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:58.924 [2024-11-20 12:39:04.578771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:58.924 [2024-11-20 12:39:04.578776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.924 [2024-11-20 12:39:04.578780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.578782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.925 [2024-11-20 12:39:04.578787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.925 [2024-11-20 12:39:04.578795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.925 [2024-11-20 12:39:04.578845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.925 [2024-11-20 12:39:04.578850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.925 [2024-11-20 12:39:04.578853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.578856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.925 [2024-11-20 12:39:04.578860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:58.925 [2024-11-20 12:39:04.578867] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.578870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.578873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.925 [2024-11-20 12:39:04.578880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.925 [2024-11-20 12:39:04.578889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.925 [2024-11-20 12:39:04.578962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.925 [2024-11-20 12:39:04.578967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.925 [2024-11-20 12:39:04.578970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.578973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.925 [2024-11-20 12:39:04.578976] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:58.925 [2024-11-20 12:39:04.578981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:58.925 [2024-11-20 12:39:04.578986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:58.925 [2024-11-20 12:39:04.579094] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:58.925 [2024-11-20 12:39:04.579098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:58.925 [2024-11-20 12:39:04.579104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.925 [2024-11-20 12:39:04.579115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.925 [2024-11-20 12:39:04.579123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.925 [2024-11-20 12:39:04.579176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.925 [2024-11-20 12:39:04.579181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.925 [2024-11-20 12:39:04.579184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.925 [2024-11-20 12:39:04.579190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:58.925 [2024-11-20 12:39:04.579197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.925 [2024-11-20 12:39:04.579208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.925 [2024-11-20 12:39:04.579216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.925 [2024-11-20 
12:39:04.579268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.925 [2024-11-20 12:39:04.579273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.925 [2024-11-20 12:39:04.579276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.925 [2024-11-20 12:39:04.579282] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:58.925 [2024-11-20 12:39:04.579286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:58.925 [2024-11-20 12:39:04.579294] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:58.925 [2024-11-20 12:39:04.579306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:58.925 [2024-11-20 12:39:04.579314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.925 [2024-11-20 12:39:04.579322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.925 [2024-11-20 12:39:04.579330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.925 [2024-11-20 12:39:04.579408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.925 [2024-11-20 12:39:04.579418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:58.925 [2024-11-20 12:39:04.579421] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579424] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3690): datao=0, datal=4096, cccid=0 00:24:58.925 [2024-11-20 12:39:04.579428] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925100) on tqpair(0x18c3690): expected_datao=0, payload_size=4096 00:24:58.925 [2024-11-20 12:39:04.579432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579443] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579447] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.925 [2024-11-20 12:39:04.579485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.925 [2024-11-20 12:39:04.579488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.925 [2024-11-20 12:39:04.579497] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:58.925 [2024-11-20 12:39:04.579501] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:58.925 [2024-11-20 12:39:04.579505] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:58.925 [2024-11-20 12:39:04.579512] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:58.925 [2024-11-20 12:39:04.579516] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:58.925 [2024-11-20 12:39:04.579519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:58.925 [2024-11-20 12:39:04.579528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:58.925 [2024-11-20 12:39:04.579534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.925 [2024-11-20 12:39:04.579545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:58.925 [2024-11-20 12:39:04.579555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.925 [2024-11-20 12:39:04.579614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.925 [2024-11-20 12:39:04.579619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.925 [2024-11-20 12:39:04.579623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690 00:24:58.925 [2024-11-20 12:39:04.579632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.925 [2024-11-20 12:39:04.579635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.579642] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.926 [2024-11-20 12:39:04.579647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.579657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.926 [2024-11-20 12:39:04.579662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.579672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.926 [2024-11-20 12:39:04.579677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.579687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.926 [2024-11-20 12:39:04.579690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:58.926 [2024-11-20 12:39:04.579697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:58.926 [2024-11-20 12:39:04.579702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.579710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.926 [2024-11-20 12:39:04.579720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925100, cid 0, qid 0 00:24:58.926 [2024-11-20 12:39:04.579724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925280, cid 1, qid 0 00:24:58.926 [2024-11-20 12:39:04.579728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925400, cid 2, qid 0 00:24:58.926 [2024-11-20 12:39:04.579731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:58.926 [2024-11-20 12:39:04.579735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925700, cid 4, qid 0 00:24:58.926 [2024-11-20 12:39:04.579833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.926 [2024-11-20 12:39:04.579838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.926 [2024-11-20 12:39:04.579840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925700) on tqpair=0x18c3690 00:24:58.926 [2024-11-20 12:39:04.579850] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:58.926 [2024-11-20 12:39:04.579854] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:58.926 [2024-11-20 12:39:04.579863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.579871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.926 [2024-11-20 12:39:04.579880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925700, cid 4, qid 0 00:24:58.926 [2024-11-20 12:39:04.579942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.926 [2024-11-20 12:39:04.579947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.926 [2024-11-20 12:39:04.579950] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579952] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3690): datao=0, datal=4096, cccid=4 00:24:58.926 [2024-11-20 12:39:04.579956] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925700) on tqpair(0x18c3690): expected_datao=0, payload_size=4096 00:24:58.926 [2024-11-20 12:39:04.579959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579968] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.579971] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.926 [2024-11-20 12:39:04.621557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.926 [2024-11-20 12:39:04.621560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1925700) on tqpair=0x18c3690 00:24:58.926 [2024-11-20 12:39:04.621575] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:58.926 [2024-11-20 12:39:04.621596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.621607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.926 [2024-11-20 12:39:04.621612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.621624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.926 [2024-11-20 12:39:04.621638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925700, cid 4, qid 0 00:24:58.926 [2024-11-20 12:39:04.621642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925880, cid 5, qid 0 00:24:58.926 [2024-11-20 12:39:04.621759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.926 [2024-11-20 12:39:04.621764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.926 [2024-11-20 12:39:04.621767] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621770] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3690): datao=0, datal=1024, cccid=4 00:24:58.926 [2024-11-20 12:39:04.621773] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925700) on tqpair(0x18c3690): expected_datao=0, payload_size=1024 00:24:58.926 [2024-11-20 12:39:04.621777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621785] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.926 [2024-11-20 12:39:04.621796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.926 [2024-11-20 12:39:04.621799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.621802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925880) on tqpair=0x18c3690 00:24:58.926 [2024-11-20 12:39:04.663544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.926 [2024-11-20 12:39:04.663554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.926 [2024-11-20 12:39:04.663557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.663560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925700) on tqpair=0x18c3690 00:24:58.926 [2024-11-20 12:39:04.663570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.926 [2024-11-20 12:39:04.663574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3690) 00:24:58.926 [2024-11-20 12:39:04.663580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.926 [2024-11-20 12:39:04.663593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925700, cid 4, qid 0 00:24:58.926 [2024-11-20 12:39:04.663706] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.926 [2024-11-20 12:39:04.663711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.927 [2024-11-20 12:39:04.663713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663716] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3690): datao=0, datal=3072, cccid=4 00:24:58.927 [2024-11-20 12:39:04.663720] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925700) on tqpair(0x18c3690): expected_datao=0, payload_size=3072 00:24:58.927 [2024-11-20 12:39:04.663723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663729] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663732] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.927 [2024-11-20 12:39:04.663744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.927 [2024-11-20 12:39:04.663747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925700) on tqpair=0x18c3690 00:24:58.927 [2024-11-20 12:39:04.663757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18c3690) 00:24:58.927 [2024-11-20 12:39:04.663765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.927 [2024-11-20 12:39:04.663776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925700, cid 4, qid 0 00:24:58.927 [2024-11-20 
12:39:04.663852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.927 [2024-11-20 12:39:04.663857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.927 [2024-11-20 12:39:04.663860] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663862] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18c3690): datao=0, datal=8, cccid=4 00:24:58.927 [2024-11-20 12:39:04.663866] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925700) on tqpair(0x18c3690): expected_datao=0, payload_size=8 00:24:58.927 [2024-11-20 12:39:04.663869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663874] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.927 [2024-11-20 12:39:04.663877] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.194 [2024-11-20 12:39:04.708420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.194 [2024-11-20 12:39:04.708433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.194 [2024-11-20 12:39:04.708439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.194 [2024-11-20 12:39:04.708442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925700) on tqpair=0x18c3690 00:24:59.194 ===================================================== 00:24:59.194 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:59.194 ===================================================== 00:24:59.194 Controller Capabilities/Features 00:24:59.194 ================================ 00:24:59.194 Vendor ID: 0000 00:24:59.194 Subsystem Vendor ID: 0000 00:24:59.194 Serial Number: .................... 00:24:59.194 Model Number: ........................................ 
00:24:59.194 Firmware Version: 25.01 00:24:59.194 Recommended Arb Burst: 0 00:24:59.194 IEEE OUI Identifier: 00 00 00 00:24:59.194 Multi-path I/O 00:24:59.194 May have multiple subsystem ports: No 00:24:59.194 May have multiple controllers: No 00:24:59.194 Associated with SR-IOV VF: No 00:24:59.194 Max Data Transfer Size: 131072 00:24:59.194 Max Number of Namespaces: 0 00:24:59.194 Max Number of I/O Queues: 1024 00:24:59.194 NVMe Specification Version (VS): 1.3 00:24:59.194 NVMe Specification Version (Identify): 1.3 00:24:59.194 Maximum Queue Entries: 128 00:24:59.194 Contiguous Queues Required: Yes 00:24:59.194 Arbitration Mechanisms Supported 00:24:59.194 Weighted Round Robin: Not Supported 00:24:59.194 Vendor Specific: Not Supported 00:24:59.194 Reset Timeout: 15000 ms 00:24:59.194 Doorbell Stride: 4 bytes 00:24:59.194 NVM Subsystem Reset: Not Supported 00:24:59.194 Command Sets Supported 00:24:59.194 NVM Command Set: Supported 00:24:59.194 Boot Partition: Not Supported 00:24:59.194 Memory Page Size Minimum: 4096 bytes 00:24:59.194 Memory Page Size Maximum: 4096 bytes 00:24:59.194 Persistent Memory Region: Not Supported 00:24:59.194 Optional Asynchronous Events Supported 00:24:59.195 Namespace Attribute Notices: Not Supported 00:24:59.195 Firmware Activation Notices: Not Supported 00:24:59.195 ANA Change Notices: Not Supported 00:24:59.195 PLE Aggregate Log Change Notices: Not Supported 00:24:59.195 LBA Status Info Alert Notices: Not Supported 00:24:59.195 EGE Aggregate Log Change Notices: Not Supported 00:24:59.195 Normal NVM Subsystem Shutdown event: Not Supported 00:24:59.195 Zone Descriptor Change Notices: Not Supported 00:24:59.195 Discovery Log Change Notices: Supported 00:24:59.195 Controller Attributes 00:24:59.195 128-bit Host Identifier: Not Supported 00:24:59.195 Non-Operational Permissive Mode: Not Supported 00:24:59.195 NVM Sets: Not Supported 00:24:59.195 Read Recovery Levels: Not Supported 00:24:59.195 Endurance Groups: Not Supported 00:24:59.195 
Predictable Latency Mode: Not Supported 00:24:59.195 Traffic Based Keep ALive: Not Supported 00:24:59.195 Namespace Granularity: Not Supported 00:24:59.195 SQ Associations: Not Supported 00:24:59.195 UUID List: Not Supported 00:24:59.195 Multi-Domain Subsystem: Not Supported 00:24:59.195 Fixed Capacity Management: Not Supported 00:24:59.195 Variable Capacity Management: Not Supported 00:24:59.195 Delete Endurance Group: Not Supported 00:24:59.195 Delete NVM Set: Not Supported 00:24:59.195 Extended LBA Formats Supported: Not Supported 00:24:59.195 Flexible Data Placement Supported: Not Supported 00:24:59.195 00:24:59.195 Controller Memory Buffer Support 00:24:59.195 ================================ 00:24:59.195 Supported: No 00:24:59.195 00:24:59.195 Persistent Memory Region Support 00:24:59.195 ================================ 00:24:59.195 Supported: No 00:24:59.195 00:24:59.195 Admin Command Set Attributes 00:24:59.195 ============================ 00:24:59.195 Security Send/Receive: Not Supported 00:24:59.195 Format NVM: Not Supported 00:24:59.195 Firmware Activate/Download: Not Supported 00:24:59.195 Namespace Management: Not Supported 00:24:59.195 Device Self-Test: Not Supported 00:24:59.195 Directives: Not Supported 00:24:59.195 NVMe-MI: Not Supported 00:24:59.195 Virtualization Management: Not Supported 00:24:59.195 Doorbell Buffer Config: Not Supported 00:24:59.195 Get LBA Status Capability: Not Supported 00:24:59.195 Command & Feature Lockdown Capability: Not Supported 00:24:59.195 Abort Command Limit: 1 00:24:59.195 Async Event Request Limit: 4 00:24:59.195 Number of Firmware Slots: N/A 00:24:59.195 Firmware Slot 1 Read-Only: N/A 00:24:59.195 Firmware Activation Without Reset: N/A 00:24:59.195 Multiple Update Detection Support: N/A 00:24:59.195 Firmware Update Granularity: No Information Provided 00:24:59.195 Per-Namespace SMART Log: No 00:24:59.195 Asymmetric Namespace Access Log Page: Not Supported 00:24:59.195 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:59.195 Command Effects Log Page: Not Supported 00:24:59.195 Get Log Page Extended Data: Supported 00:24:59.195 Telemetry Log Pages: Not Supported 00:24:59.195 Persistent Event Log Pages: Not Supported 00:24:59.195 Supported Log Pages Log Page: May Support 00:24:59.195 Commands Supported & Effects Log Page: Not Supported 00:24:59.195 Feature Identifiers & Effects Log Page:May Support 00:24:59.195 NVMe-MI Commands & Effects Log Page: May Support 00:24:59.195 Data Area 4 for Telemetry Log: Not Supported 00:24:59.195 Error Log Page Entries Supported: 128 00:24:59.195 Keep Alive: Not Supported 00:24:59.195 00:24:59.195 NVM Command Set Attributes 00:24:59.195 ========================== 00:24:59.195 Submission Queue Entry Size 00:24:59.195 Max: 1 00:24:59.195 Min: 1 00:24:59.195 Completion Queue Entry Size 00:24:59.195 Max: 1 00:24:59.195 Min: 1 00:24:59.195 Number of Namespaces: 0 00:24:59.195 Compare Command: Not Supported 00:24:59.195 Write Uncorrectable Command: Not Supported 00:24:59.195 Dataset Management Command: Not Supported 00:24:59.195 Write Zeroes Command: Not Supported 00:24:59.195 Set Features Save Field: Not Supported 00:24:59.195 Reservations: Not Supported 00:24:59.195 Timestamp: Not Supported 00:24:59.195 Copy: Not Supported 00:24:59.195 Volatile Write Cache: Not Present 00:24:59.195 Atomic Write Unit (Normal): 1 00:24:59.195 Atomic Write Unit (PFail): 1 00:24:59.195 Atomic Compare & Write Unit: 1 00:24:59.195 Fused Compare & Write: Supported 00:24:59.195 Scatter-Gather List 00:24:59.195 SGL Command Set: Supported 00:24:59.195 SGL Keyed: Supported 00:24:59.195 SGL Bit Bucket Descriptor: Not Supported 00:24:59.195 SGL Metadata Pointer: Not Supported 00:24:59.195 Oversized SGL: Not Supported 00:24:59.195 SGL Metadata Address: Not Supported 00:24:59.195 SGL Offset: Supported 00:24:59.195 Transport SGL Data Block: Not Supported 00:24:59.195 Replay Protected Memory Block: Not Supported 00:24:59.195 00:24:59.195 
Firmware Slot Information
00:24:59.195 =========================
00:24:59.195 Active slot: 0
00:24:59.195
00:24:59.195
00:24:59.195 Error Log
00:24:59.195 =========
00:24:59.195
00:24:59.195 Active Namespaces
00:24:59.195 =================
00:24:59.195 Discovery Log Page
00:24:59.195 ==================
00:24:59.195 Generation Counter: 2
00:24:59.195 Number of Records: 2
00:24:59.195 Record Format: 0
00:24:59.195
00:24:59.195 Discovery Log Entry 0
00:24:59.195 ----------------------
00:24:59.195 Transport Type: 3 (TCP)
00:24:59.195 Address Family: 1 (IPv4)
00:24:59.195 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:59.195 Entry Flags:
00:24:59.195   Duplicate Returned Information: 1
00:24:59.195   Explicit Persistent Connection Support for Discovery: 1
00:24:59.195 Transport Requirements:
00:24:59.195   Secure Channel: Not Required
00:24:59.195 Port ID: 0 (0x0000)
00:24:59.195 Controller ID: 65535 (0xffff)
00:24:59.195 Admin Max SQ Size: 128
00:24:59.195 Transport Service Identifier: 4420
00:24:59.195 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:59.195 Transport Address: 10.0.0.2
00:24:59.195 Discovery Log Entry 1
00:24:59.195 ----------------------
00:24:59.195 Transport Type: 3 (TCP)
00:24:59.195 Address Family: 1 (IPv4)
00:24:59.195 Subsystem Type: 2 (NVM Subsystem)
00:24:59.195 Entry Flags:
00:24:59.195   Duplicate Returned Information: 0
00:24:59.195   Explicit Persistent Connection Support for Discovery: 0
00:24:59.195 Transport Requirements:
00:24:59.195   Secure Channel: Not Required
00:24:59.195 Port ID: 0 (0x0000)
00:24:59.195 Controller ID: 65535 (0xffff)
00:24:59.195 Admin Max SQ Size: 128
00:24:59.195 Transport Service Identifier: 4420
00:24:59.195 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:59.195 Transport Address: 10.0.0.2 [2024-11-20 12:39:04.708517] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:59.195 [2024-11-20 12:39:04.708526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925100) on tqpair=0x18c3690
00:24:59.195 [2024-11-20 12:39:04.708531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.195 [2024-11-20 12:39:04.708536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925280) on tqpair=0x18c3690
00:24:59.195 [2024-11-20 12:39:04.708539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.195 [2024-11-20 12:39:04.708543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925400) on tqpair=0x18c3690
00:24:59.195 [2024-11-20 12:39:04.708547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.195 [2024-11-20 12:39:04.708550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690
00:24:59.195 [2024-11-20 12:39:04.708554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.195 [2024-11-20 12:39:04.708563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:59.195 [2024-11-20 12:39:04.708566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:59.195 [2024-11-20 12:39:04.708569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690)
00:24:59.195 [2024-11-20 12:39:04.708575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.195 [2024-11-20 12:39:04.708587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0
00:24:59.195 [2024-11-20 12:39:04.708703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:59.195 [2024-11-20 12:39:04.708708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:59.195 [2024-11-20 12:39:04.708711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:59.195 [2024-11-20 12:39:04.708714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690
00:24:59.195 [2024-11-20 12:39:04.708719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:59.195 [2024-11-20 12:39:04.708722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:59.195 [2024-11-20 12:39:04.708725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690)
00:24:59.195 [2024-11-20 12:39:04.708730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.195 [2024-11-20 12:39:04.708741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0
00:24:59.196 [2024-11-20 12:39:04.708815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:59.196 [2024-11-20 12:39:04.708820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:59.196 [2024-11-20 12:39:04.708823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:59.196 [2024-11-20 12:39:04.708826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690
00:24:59.196 [2024-11-20 12:39:04.708830] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:24:59.196 [2024-11-20 12:39:04.708833] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:24:59.196 [2024-11-20 12:39:04.708841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:59.196 [2024-11-20 12:39:04.708844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
0x1925580, cid 3, qid 0 00:24:59.198 [2024-11-20 12:39:04.711690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.198 [2024-11-20 12:39:04.711696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.198 [2024-11-20 12:39:04.711698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.198 [2024-11-20 12:39:04.711708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.198 [2024-11-20 12:39:04.711719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.198 [2024-11-20 12:39:04.711727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.198 [2024-11-20 12:39:04.711781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.198 [2024-11-20 12:39:04.711786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.198 [2024-11-20 12:39:04.711789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.198 [2024-11-20 12:39:04.711798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.198 [2024-11-20 12:39:04.711809] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.198 [2024-11-20 12:39:04.711817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.198 [2024-11-20 12:39:04.711869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.198 [2024-11-20 12:39:04.711874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.198 [2024-11-20 12:39:04.711876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.198 [2024-11-20 12:39:04.711886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.198 [2024-11-20 12:39:04.711899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.198 [2024-11-20 12:39:04.711907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.198 [2024-11-20 12:39:04.711959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.198 [2024-11-20 12:39:04.711964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.198 [2024-11-20 12:39:04.711967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.198 [2024-11-20 12:39:04.711977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711981] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.198 [2024-11-20 12:39:04.711983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.198 [2024-11-20 12:39:04.711988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.198 [2024-11-20 12:39:04.711996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.198 [2024-11-20 12:39:04.712054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.198 [2024-11-20 12:39:04.712059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.712062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.199 [2024-11-20 12:39:04.712072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.199 [2024-11-20 12:39:04.712083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.199 [2024-11-20 12:39:04.712090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.199 [2024-11-20 12:39:04.712140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.199 [2024-11-20 12:39:04.712145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.712148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712150] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.199 [2024-11-20 12:39:04.712158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.199 [2024-11-20 12:39:04.712169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.199 [2024-11-20 12:39:04.712177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.199 [2024-11-20 12:39:04.712233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.199 [2024-11-20 12:39:04.712238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.712240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.199 [2024-11-20 12:39:04.712250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.199 [2024-11-20 12:39:04.712263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.199 [2024-11-20 12:39:04.712271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.199 [2024-11-20 12:39:04.712326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.199 [2024-11-20 
12:39:04.712331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.712333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.199 [2024-11-20 12:39:04.712343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.712349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.199 [2024-11-20 12:39:04.712354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.199 [2024-11-20 12:39:04.712362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.199 [2024-11-20 12:39:04.716418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.199 [2024-11-20 12:39:04.716425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.716428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.716430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.199 [2024-11-20 12:39:04.716438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.716441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.716444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18c3690) 00:24:59.199 [2024-11-20 12:39:04.716449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.199 [2024-11-20 
12:39:04.716458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925580, cid 3, qid 0 00:24:59.199 [2024-11-20 12:39:04.716527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.199 [2024-11-20 12:39:04.716533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.716535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.716539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1925580) on tqpair=0x18c3690 00:24:59.199 [2024-11-20 12:39:04.716544] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:59.199 00:24:59.199 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:59.199 [2024-11-20 12:39:04.754765] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:24:59.199 [2024-11-20 12:39:04.754811] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019273 ] 00:24:59.199 [2024-11-20 12:39:04.792381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:59.199 [2024-11-20 12:39:04.792423] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:59.199 [2024-11-20 12:39:04.792430] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:59.199 [2024-11-20 12:39:04.792441] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:59.199 [2024-11-20 12:39:04.792449] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:59.199 [2024-11-20 12:39:04.796608] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:59.199 [2024-11-20 12:39:04.796635] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a50690 0 00:24:59.199 [2024-11-20 12:39:04.803421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:59.199 [2024-11-20 12:39:04.803434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:59.199 [2024-11-20 12:39:04.803438] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:59.199 [2024-11-20 12:39:04.803441] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:59.199 [2024-11-20 12:39:04.803467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.803471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.803474] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.199 [2024-11-20 12:39:04.803483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:59.199 [2024-11-20 12:39:04.803499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.199 [2024-11-20 12:39:04.811421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.199 [2024-11-20 12:39:04.811429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.811432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.811435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.199 [2024-11-20 12:39:04.811444] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:59.199 [2024-11-20 12:39:04.811450] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:59.199 [2024-11-20 12:39:04.811454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:59.199 [2024-11-20 12:39:04.811464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.811467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.811470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.199 [2024-11-20 12:39:04.811477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.199 [2024-11-20 12:39:04.811487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.199 [2024-11-20 12:39:04.811642] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.199 [2024-11-20 12:39:04.811647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.199 [2024-11-20 12:39:04.811650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.199 [2024-11-20 12:39:04.811653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.199 [2024-11-20 12:39:04.811658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:59.199 [2024-11-20 12:39:04.811664] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:59.200 [2024-11-20 12:39:04.811670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.811684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.200 [2024-11-20 12:39:04.811693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.200 [2024-11-20 12:39:04.811748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.200 [2024-11-20 12:39:04.811753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.200 [2024-11-20 12:39:04.811756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.200 [2024-11-20 12:39:04.811763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:24:59.200 [2024-11-20 12:39:04.811769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:59.200 [2024-11-20 12:39:04.811775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.811786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.200 [2024-11-20 12:39:04.811795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.200 [2024-11-20 12:39:04.811850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.200 [2024-11-20 12:39:04.811855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.200 [2024-11-20 12:39:04.811858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.200 [2024-11-20 12:39:04.811865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:59.200 [2024-11-20 12:39:04.811872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.811883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.200 [2024-11-20 12:39:04.811892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.200 [2024-11-20 12:39:04.811948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.200 [2024-11-20 12:39:04.811953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.200 [2024-11-20 12:39:04.811956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.811959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.200 [2024-11-20 12:39:04.811963] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:59.200 [2024-11-20 12:39:04.811966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:59.200 [2024-11-20 12:39:04.811972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:59.200 [2024-11-20 12:39:04.812079] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:59.200 [2024-11-20 12:39:04.812083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:59.200 [2024-11-20 12:39:04.812089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.812102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.200 [2024-11-20 12:39:04.812110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.200 [2024-11-20 12:39:04.812163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.200 [2024-11-20 12:39:04.812168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.200 [2024-11-20 12:39:04.812171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.200 [2024-11-20 12:39:04.812178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:59.200 [2024-11-20 12:39:04.812185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.812196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.200 [2024-11-20 12:39:04.812204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.200 [2024-11-20 12:39:04.812257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.200 [2024-11-20 12:39:04.812262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.200 [2024-11-20 12:39:04.812264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.200 [2024-11-20 12:39:04.812271] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:59.200 [2024-11-20 12:39:04.812275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:59.200 [2024-11-20 12:39:04.812281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:59.200 [2024-11-20 12:39:04.812292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:59.200 [2024-11-20 12:39:04.812299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.812307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.200 [2024-11-20 12:39:04.812316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.200 [2024-11-20 12:39:04.812397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.200 [2024-11-20 12:39:04.812402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.200 [2024-11-20 12:39:04.812405] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812408] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=4096, cccid=0 00:24:59.200 [2024-11-20 12:39:04.812416] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab2100) on tqpair(0x1a50690): expected_datao=0, payload_size=4096 00:24:59.200 [2024-11-20 12:39:04.812419] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812430] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.812436] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.200 [2024-11-20 12:39:04.853554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.200 [2024-11-20 12:39:04.853557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.200 [2024-11-20 12:39:04.853567] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:59.200 [2024-11-20 12:39:04.853571] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:59.200 [2024-11-20 12:39:04.853575] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:59.200 [2024-11-20 12:39:04.853582] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:59.200 [2024-11-20 12:39:04.853586] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:59.200 [2024-11-20 12:39:04.853590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:59.200 [2024-11-20 12:39:04.853599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:59.200 [2024-11-20 12:39:04.853605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853608] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.853618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:59.200 [2024-11-20 12:39:04.853629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2100, cid 0, qid 0 00:24:59.200 [2024-11-20 12:39:04.853684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.200 [2024-11-20 12:39:04.853690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.200 [2024-11-20 12:39:04.853693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.200 [2024-11-20 12:39:04.853701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.853712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.200 [2024-11-20 12:39:04.853717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.200 [2024-11-20 12:39:04.853723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a50690) 00:24:59.200 [2024-11-20 12:39:04.853727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:59.201 [2024-11-20 12:39:04.853732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.853735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.853738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a50690) 00:24:59.201 [2024-11-20 12:39:04.853742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.201 [2024-11-20 12:39:04.853747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.853750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.853755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.201 [2024-11-20 12:39:04.853759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.201 [2024-11-20 12:39:04.853763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.853770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.853775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.853778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a50690) 00:24:59.201 [2024-11-20 12:39:04.853783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.201 [2024-11-20 12:39:04.853793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1ab2100, cid 0, qid 0 00:24:59.201 [2024-11-20 12:39:04.853798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2280, cid 1, qid 0 00:24:59.201 [2024-11-20 12:39:04.853801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2400, cid 2, qid 0 00:24:59.201 [2024-11-20 12:39:04.853805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.201 [2024-11-20 12:39:04.853809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2700, cid 4, qid 0 00:24:59.201 [2024-11-20 12:39:04.853890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.201 [2024-11-20 12:39:04.853896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.201 [2024-11-20 12:39:04.853899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.853902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2700) on tqpair=0x1a50690 00:24:59.201 [2024-11-20 12:39:04.853907] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:59.201 [2024-11-20 12:39:04.853912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.853919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.853925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.853930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.853933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.201 [2024-11-20 
12:39:04.853936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a50690) 00:24:59.201 [2024-11-20 12:39:04.853942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:59.201 [2024-11-20 12:39:04.853950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2700, cid 4, qid 0 00:24:59.201 [2024-11-20 12:39:04.854000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.201 [2024-11-20 12:39:04.854006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.201 [2024-11-20 12:39:04.854008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2700) on tqpair=0x1a50690 00:24:59.201 [2024-11-20 12:39:04.854059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a50690) 00:24:59.201 [2024-11-20 12:39:04.854084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.201 [2024-11-20 12:39:04.854093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2700, cid 4, qid 0 00:24:59.201 [2024-11-20 12:39:04.854157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.201 [2024-11-20 12:39:04.854162] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.201 [2024-11-20 12:39:04.854165] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854168] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=4096, cccid=4 00:24:59.201 [2024-11-20 12:39:04.854171] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab2700) on tqpair(0x1a50690): expected_datao=0, payload_size=4096 00:24:59.201 [2024-11-20 12:39:04.854175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854184] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854188] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.201 [2024-11-20 12:39:04.854226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.201 [2024-11-20 12:39:04.854229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2700) on tqpair=0x1a50690 00:24:59.201 [2024-11-20 12:39:04.854240] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:59.201 [2024-11-20 12:39:04.854247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1a50690) 00:24:59.201 [2024-11-20 12:39:04.854269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.201 [2024-11-20 12:39:04.854277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2700, cid 4, qid 0 00:24:59.201 [2024-11-20 12:39:04.854346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.201 [2024-11-20 12:39:04.854351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.201 [2024-11-20 12:39:04.854354] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854357] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=4096, cccid=4 00:24:59.201 [2024-11-20 12:39:04.854361] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab2700) on tqpair(0x1a50690): expected_datao=0, payload_size=4096 00:24:59.201 [2024-11-20 12:39:04.854364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854369] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854372] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.201 [2024-11-20 12:39:04.854387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.201 [2024-11-20 12:39:04.854390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2700) on tqpair=0x1a50690 00:24:59.201 [2024-11-20 12:39:04.854405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:59.201 
[2024-11-20 12:39:04.854419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a50690) 00:24:59.201 [2024-11-20 12:39:04.854434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.201 [2024-11-20 12:39:04.854443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2700, cid 4, qid 0 00:24:59.201 [2024-11-20 12:39:04.854508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.201 [2024-11-20 12:39:04.854514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.201 [2024-11-20 12:39:04.854516] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854519] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=4096, cccid=4 00:24:59.201 [2024-11-20 12:39:04.854523] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab2700) on tqpair(0x1a50690): expected_datao=0, payload_size=4096 00:24:59.201 [2024-11-20 12:39:04.854526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854531] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854534] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.201 [2024-11-20 12:39:04.854551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.201 [2024-11-20 12:39:04.854554] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.201 [2024-11-20 12:39:04.854557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2700) on tqpair=0x1a50690 00:24:59.201 [2024-11-20 12:39:04.854562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:59.201 [2024-11-20 12:39:04.854595] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:59.202 [2024-11-20 12:39:04.854599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:59.202 [2024-11-20 12:39:04.854603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:59.202 [2024-11-20 12:39:04.854615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854618] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.854624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.202 [2024-11-20 12:39:04.854630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.854641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.202 [2024-11-20 12:39:04.854651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2700, cid 4, qid 0 00:24:59.202 [2024-11-20 12:39:04.854656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2880, cid 5, qid 0 00:24:59.202 [2024-11-20 12:39:04.854727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.854732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.854735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2700) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 12:39:04.854743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.854747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.854750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2880) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 
12:39:04.854760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.854769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.202 [2024-11-20 12:39:04.854777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2880, cid 5, qid 0 00:24:59.202 [2024-11-20 12:39:04.854833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.854839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.854842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2880) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 12:39:04.854852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.854861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.202 [2024-11-20 12:39:04.854869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2880, cid 5, qid 0 00:24:59.202 [2024-11-20 12:39:04.854920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.854925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.854928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1ab2880) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 12:39:04.854938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.854941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.854947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.202 [2024-11-20 12:39:04.854954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2880, cid 5, qid 0 00:24:59.202 [2024-11-20 12:39:04.855006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.855011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.855014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2880) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 12:39:04.855032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.855041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.202 [2024-11-20 12:39:04.855046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.855054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:59.202 [2024-11-20 12:39:04.855060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.855068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.202 [2024-11-20 12:39:04.855073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a50690) 00:24:59.202 [2024-11-20 12:39:04.855081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.202 [2024-11-20 12:39:04.855091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2880, cid 5, qid 0 00:24:59.202 [2024-11-20 12:39:04.855095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2700, cid 4, qid 0 00:24:59.202 [2024-11-20 12:39:04.855099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2a00, cid 6, qid 0 00:24:59.202 [2024-11-20 12:39:04.855103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2b80, cid 7, qid 0 00:24:59.202 [2024-11-20 12:39:04.855232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.202 [2024-11-20 12:39:04.855238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.202 [2024-11-20 12:39:04.855241] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855244] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=8192, cccid=5 00:24:59.202 [2024-11-20 12:39:04.855247] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab2880) on tqpair(0x1a50690): expected_datao=0, payload_size=8192 00:24:59.202 [2024-11-20 12:39:04.855251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855262] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855265] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.202 [2024-11-20 12:39:04.855274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.202 [2024-11-20 12:39:04.855277] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855280] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=512, cccid=4 00:24:59.202 [2024-11-20 12:39:04.855284] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab2700) on tqpair(0x1a50690): expected_datao=0, payload_size=512 00:24:59.202 [2024-11-20 12:39:04.855287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855292] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855295] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.202 [2024-11-20 12:39:04.855305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.202 [2024-11-20 12:39:04.855309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855311] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=512, cccid=6 00:24:59.202 [2024-11-20 12:39:04.855315] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1ab2a00) on tqpair(0x1a50690): expected_datao=0, payload_size=512 00:24:59.202 [2024-11-20 12:39:04.855318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855323] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855326] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:59.202 [2024-11-20 12:39:04.855335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:59.202 [2024-11-20 12:39:04.855337] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855340] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a50690): datao=0, datal=4096, cccid=7 00:24:59.202 [2024-11-20 12:39:04.855344] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab2b80) on tqpair(0x1a50690): expected_datao=0, payload_size=4096 00:24:59.202 [2024-11-20 12:39:04.855347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855352] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855356] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.855366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.855369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2880) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 12:39:04.855381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.855386] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.855388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2700) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 12:39:04.855399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.855404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.855406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.202 [2024-11-20 12:39:04.855409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2a00) on tqpair=0x1a50690 00:24:59.202 [2024-11-20 12:39:04.859423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.202 [2024-11-20 12:39:04.859428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.202 [2024-11-20 12:39:04.859431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.203 [2024-11-20 12:39:04.859434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2b80) on tqpair=0x1a50690 00:24:59.203 ===================================================== 00:24:59.203 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:59.203 ===================================================== 00:24:59.203 Controller Capabilities/Features 00:24:59.203 ================================ 00:24:59.203 Vendor ID: 8086 00:24:59.203 Subsystem Vendor ID: 8086 00:24:59.203 Serial Number: SPDK00000000000001 00:24:59.203 Model Number: SPDK bdev Controller 00:24:59.203 Firmware Version: 25.01 00:24:59.203 Recommended Arb Burst: 6 00:24:59.203 IEEE OUI Identifier: e4 d2 5c 00:24:59.203 Multi-path I/O 00:24:59.203 May have multiple subsystem ports: Yes 00:24:59.203 May have multiple controllers: Yes 00:24:59.203 Associated with SR-IOV VF: No 
00:24:59.203 Max Data Transfer Size: 131072 00:24:59.203 Max Number of Namespaces: 32 00:24:59.203 Max Number of I/O Queues: 127 00:24:59.203 NVMe Specification Version (VS): 1.3 00:24:59.203 NVMe Specification Version (Identify): 1.3 00:24:59.203 Maximum Queue Entries: 128 00:24:59.203 Contiguous Queues Required: Yes 00:24:59.203 Arbitration Mechanisms Supported 00:24:59.203 Weighted Round Robin: Not Supported 00:24:59.203 Vendor Specific: Not Supported 00:24:59.203 Reset Timeout: 15000 ms 00:24:59.203 Doorbell Stride: 4 bytes 00:24:59.203 NVM Subsystem Reset: Not Supported 00:24:59.203 Command Sets Supported 00:24:59.203 NVM Command Set: Supported 00:24:59.203 Boot Partition: Not Supported 00:24:59.203 Memory Page Size Minimum: 4096 bytes 00:24:59.203 Memory Page Size Maximum: 4096 bytes 00:24:59.203 Persistent Memory Region: Not Supported 00:24:59.203 Optional Asynchronous Events Supported 00:24:59.203 Namespace Attribute Notices: Supported 00:24:59.203 Firmware Activation Notices: Not Supported 00:24:59.203 ANA Change Notices: Not Supported 00:24:59.203 PLE Aggregate Log Change Notices: Not Supported 00:24:59.203 LBA Status Info Alert Notices: Not Supported 00:24:59.203 EGE Aggregate Log Change Notices: Not Supported 00:24:59.203 Normal NVM Subsystem Shutdown event: Not Supported 00:24:59.203 Zone Descriptor Change Notices: Not Supported 00:24:59.203 Discovery Log Change Notices: Not Supported 00:24:59.203 Controller Attributes 00:24:59.203 128-bit Host Identifier: Supported 00:24:59.203 Non-Operational Permissive Mode: Not Supported 00:24:59.203 NVM Sets: Not Supported 00:24:59.203 Read Recovery Levels: Not Supported 00:24:59.203 Endurance Groups: Not Supported 00:24:59.203 Predictable Latency Mode: Not Supported 00:24:59.203 Traffic Based Keep ALive: Not Supported 00:24:59.203 Namespace Granularity: Not Supported 00:24:59.203 SQ Associations: Not Supported 00:24:59.203 UUID List: Not Supported 00:24:59.203 Multi-Domain Subsystem: Not Supported 00:24:59.203 
Fixed Capacity Management: Not Supported 00:24:59.203 Variable Capacity Management: Not Supported 00:24:59.203 Delete Endurance Group: Not Supported 00:24:59.203 Delete NVM Set: Not Supported 00:24:59.203 Extended LBA Formats Supported: Not Supported 00:24:59.203 Flexible Data Placement Supported: Not Supported 00:24:59.203 00:24:59.203 Controller Memory Buffer Support 00:24:59.203 ================================ 00:24:59.203 Supported: No 00:24:59.203 00:24:59.203 Persistent Memory Region Support 00:24:59.203 ================================ 00:24:59.203 Supported: No 00:24:59.203 00:24:59.203 Admin Command Set Attributes 00:24:59.203 ============================ 00:24:59.203 Security Send/Receive: Not Supported 00:24:59.203 Format NVM: Not Supported 00:24:59.203 Firmware Activate/Download: Not Supported 00:24:59.203 Namespace Management: Not Supported 00:24:59.203 Device Self-Test: Not Supported 00:24:59.203 Directives: Not Supported 00:24:59.203 NVMe-MI: Not Supported 00:24:59.203 Virtualization Management: Not Supported 00:24:59.203 Doorbell Buffer Config: Not Supported 00:24:59.203 Get LBA Status Capability: Not Supported 00:24:59.203 Command & Feature Lockdown Capability: Not Supported 00:24:59.203 Abort Command Limit: 4 00:24:59.203 Async Event Request Limit: 4 00:24:59.203 Number of Firmware Slots: N/A 00:24:59.203 Firmware Slot 1 Read-Only: N/A 00:24:59.203 Firmware Activation Without Reset: N/A 00:24:59.203 Multiple Update Detection Support: N/A 00:24:59.203 Firmware Update Granularity: No Information Provided 00:24:59.203 Per-Namespace SMART Log: No 00:24:59.203 Asymmetric Namespace Access Log Page: Not Supported 00:24:59.203 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:59.203 Command Effects Log Page: Supported 00:24:59.203 Get Log Page Extended Data: Supported 00:24:59.203 Telemetry Log Pages: Not Supported 00:24:59.203 Persistent Event Log Pages: Not Supported 00:24:59.203 Supported Log Pages Log Page: May Support 00:24:59.203 Commands Supported & 
Effects Log Page: Not Supported 00:24:59.203 Feature Identifiers & Effects Log Page:May Support 00:24:59.203 NVMe-MI Commands & Effects Log Page: May Support 00:24:59.203 Data Area 4 for Telemetry Log: Not Supported 00:24:59.203 Error Log Page Entries Supported: 128 00:24:59.203 Keep Alive: Supported 00:24:59.203 Keep Alive Granularity: 10000 ms 00:24:59.203 00:24:59.203 NVM Command Set Attributes 00:24:59.203 ========================== 00:24:59.203 Submission Queue Entry Size 00:24:59.203 Max: 64 00:24:59.203 Min: 64 00:24:59.203 Completion Queue Entry Size 00:24:59.203 Max: 16 00:24:59.203 Min: 16 00:24:59.203 Number of Namespaces: 32 00:24:59.203 Compare Command: Supported 00:24:59.203 Write Uncorrectable Command: Not Supported 00:24:59.203 Dataset Management Command: Supported 00:24:59.203 Write Zeroes Command: Supported 00:24:59.203 Set Features Save Field: Not Supported 00:24:59.203 Reservations: Supported 00:24:59.203 Timestamp: Not Supported 00:24:59.203 Copy: Supported 00:24:59.203 Volatile Write Cache: Present 00:24:59.203 Atomic Write Unit (Normal): 1 00:24:59.203 Atomic Write Unit (PFail): 1 00:24:59.203 Atomic Compare & Write Unit: 1 00:24:59.203 Fused Compare & Write: Supported 00:24:59.203 Scatter-Gather List 00:24:59.203 SGL Command Set: Supported 00:24:59.203 SGL Keyed: Supported 00:24:59.203 SGL Bit Bucket Descriptor: Not Supported 00:24:59.203 SGL Metadata Pointer: Not Supported 00:24:59.203 Oversized SGL: Not Supported 00:24:59.203 SGL Metadata Address: Not Supported 00:24:59.203 SGL Offset: Supported 00:24:59.203 Transport SGL Data Block: Not Supported 00:24:59.203 Replay Protected Memory Block: Not Supported 00:24:59.203 00:24:59.203 Firmware Slot Information 00:24:59.203 ========================= 00:24:59.203 Active slot: 1 00:24:59.203 Slot 1 Firmware Revision: 25.01 00:24:59.203 00:24:59.203 00:24:59.203 Commands Supported and Effects 00:24:59.203 ============================== 00:24:59.203 Admin Commands 00:24:59.203 -------------- 
00:24:59.203 Get Log Page (02h): Supported 00:24:59.203 Identify (06h): Supported 00:24:59.203 Abort (08h): Supported 00:24:59.203 Set Features (09h): Supported 00:24:59.203 Get Features (0Ah): Supported 00:24:59.203 Asynchronous Event Request (0Ch): Supported 00:24:59.203 Keep Alive (18h): Supported 00:24:59.203 I/O Commands 00:24:59.203 ------------ 00:24:59.203 Flush (00h): Supported LBA-Change 00:24:59.203 Write (01h): Supported LBA-Change 00:24:59.203 Read (02h): Supported 00:24:59.203 Compare (05h): Supported 00:24:59.203 Write Zeroes (08h): Supported LBA-Change 00:24:59.203 Dataset Management (09h): Supported LBA-Change 00:24:59.203 Copy (19h): Supported LBA-Change 00:24:59.203 00:24:59.203 Error Log 00:24:59.203 ========= 00:24:59.203 00:24:59.203 Arbitration 00:24:59.203 =========== 00:24:59.203 Arbitration Burst: 1 00:24:59.203 00:24:59.203 Power Management 00:24:59.203 ================ 00:24:59.203 Number of Power States: 1 00:24:59.203 Current Power State: Power State #0 00:24:59.203 Power State #0: 00:24:59.203 Max Power: 0.00 W 00:24:59.203 Non-Operational State: Operational 00:24:59.203 Entry Latency: Not Reported 00:24:59.203 Exit Latency: Not Reported 00:24:59.203 Relative Read Throughput: 0 00:24:59.203 Relative Read Latency: 0 00:24:59.203 Relative Write Throughput: 0 00:24:59.204 Relative Write Latency: 0 00:24:59.204 Idle Power: Not Reported 00:24:59.204 Active Power: Not Reported 00:24:59.204 Non-Operational Permissive Mode: Not Supported 00:24:59.204 00:24:59.204 Health Information 00:24:59.204 ================== 00:24:59.204 Critical Warnings: 00:24:59.204 Available Spare Space: OK 00:24:59.204 Temperature: OK 00:24:59.204 Device Reliability: OK 00:24:59.204 Read Only: No 00:24:59.204 Volatile Memory Backup: OK 00:24:59.204 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:59.204 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:59.204 Available Spare: 0% 00:24:59.204 Available Spare Threshold: 0% 00:24:59.204 Life Percentage 
Used:[2024-11-20 12:39:04.859507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.859518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.859530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2b80, cid 7, qid 0 00:24:59.204 [2024-11-20 12:39:04.859691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.859696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.859699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2b80) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.859729] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:59.204 [2024-11-20 12:39:04.859737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2100) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.859742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.204 [2024-11-20 12:39:04.859748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2280) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.859752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.204 [2024-11-20 12:39:04.859757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2400) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.859762] nvme_qpair.c: 
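The `ABORTED - SQ DELETION (00/08)` completions above follow SPDK's `(SCT/SC)` print convention: status code type 0 is the generic command status set, and code 08h in that set is "Command Aborted due to SQ Deletion" (outstanding commands are completed with this status when their submission queue is torn down during destruct). A sketch of decoding that pair; the lookup table below covers only the codes seen in this log, not the full spec table:

```python
# Partial lookup of NVMe generic command status codes (SCT 0x0) --
# only the values relevant to this log, not the complete spec table.
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Decode the (SCT/SC) pair as printed by spdk_nvme_print_completion."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status {sc:#04x}")
    return f"SCT {sct:#x}/SC {sc:#04x}"

print(decode_status(0x0, 0x08))  # Command Aborted due to SQ Deletion
```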
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.204 [2024-11-20 12:39:04.859766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.859770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.204 [2024-11-20 12:39:04.859776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.859788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.859798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.859857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.859862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.859865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.859873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.859885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.859896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.859962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.859968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.859970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.859977] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:59.204 [2024-11-20 12:39:04.859981] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:59.204 [2024-11-20 12:39:04.859989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.859995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.860000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.860010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.860059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.860065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.860068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860071] 
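The records around this point show the spec-defined controller shutdown handshake: the host writes CC.SHN = 01b (normal shutdown) with a FABRIC PROPERTY SET, then repeatedly issues PROPERTY GET on CSTS until SHST reads 10b (shutdown processing complete) or the timeout expires (10000 ms here, the driver's default since the controller reports RTD3E = 0). The long run of repeated PROPERTY GET records that follows is exactly this polling loop. A minimal sketch of the procedure against a hypothetical register interface (`read_csts`/`read_cc`/`write_cc` are assumptions for illustration, not SPDK APIs):

```python
import time

SHN_NORMAL = 0b01 << 14    # CC.SHN (bits 15:14) = 01b: normal shutdown
SHST_MASK = 0b11 << 2      # CSTS.SHST occupies bits 3:2
SHST_COMPLETE = 0b10 << 2  # SHST = 10b: shutdown processing complete

def shutdown(read_csts, read_cc, write_cc, timeout_ms=10000, poll=lambda: None):
    """Request a normal shutdown, then poll CSTS.SHST until complete or timeout."""
    write_cc((read_cc() & ~(0b11 << 14)) | SHN_NORMAL)
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if read_csts() & SHST_MASK == SHST_COMPLETE:
            return True
        poll()  # over fabrics, each read above is itself a PROPERTY GET round trip
    return False
```

In SPDK's TCP transport each register access becomes a fabrics capsule, which is why every poll iteration in the log produces a full pdu_ch_handle / capsule_resp_hdr_handle / req_complete / capsule_cmd_send cycle on the admin qpair.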
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.860079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.860090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.860098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.860152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.860158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.860163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.860175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.860187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.860195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.860244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 
12:39:04.860249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.860252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.860263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.860274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.860282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.860333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.860338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.860341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.860351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.860363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 
12:39:04.860373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.860436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.860442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.860444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.204 [2024-11-20 12:39:04.860455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.204 [2024-11-20 12:39:04.860467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.204 [2024-11-20 12:39:04.860476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.204 [2024-11-20 12:39:04.860530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.204 [2024-11-20 12:39:04.860535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.204 [2024-11-20 12:39:04.860537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.204 [2024-11-20 12:39:04.860540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.860547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.860559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.860567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.860618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.860623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.860626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.860636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.860647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.860655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.860710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.860715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.860718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.860728] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.860739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.860747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.860801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.860807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.860810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.860819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.860831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.860839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.860890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.860896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.860898] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.860908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.860920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.860928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.860981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.860986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.860989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.860992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.861000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.861011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.861019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 
12:39:04.861072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.861077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.861080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.861091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.861102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.861110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.861158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.861165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.861168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.861178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.861190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.861198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.861251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.861256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.861259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.861269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.861281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.861289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.861341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.861347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.861349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.861360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:59.205 [2024-11-20 12:39:04.861366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.861371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.861379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.861437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.861442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.861445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.861455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.861467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.861475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.861528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.861533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.861537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) 
on tqpair=0x1a50690 00:24:59.205 [2024-11-20 12:39:04.861548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.205 [2024-11-20 12:39:04.861554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.205 [2024-11-20 12:39:04.861559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.205 [2024-11-20 12:39:04.861567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.205 [2024-11-20 12:39:04.861621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.205 [2024-11-20 12:39:04.861626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.205 [2024-11-20 12:39:04.861629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.861639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.861651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.861659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.861712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.861717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:59.206 [2024-11-20 12:39:04.861720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.861731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.861742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.861750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.861803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.861809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.861811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.861822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.861833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.861841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.861891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.861897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.861899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.861911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.861923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.861931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.861986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.861992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.861994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.861998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862017] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.862025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.862083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.862115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.862178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862195] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.862210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.862267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.862301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.862364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862370] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.862397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.862457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.862490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 
12:39:04.862548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 12:39:04.862581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.862639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.206 [2024-11-20 12:39:04.862652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.206 [2024-11-20 12:39:04.862660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.206 [2024-11-20 12:39:04.862665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.206 [2024-11-20 
12:39:04.862673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.206 [2024-11-20 12:39:04.862725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.206 [2024-11-20 12:39:04.862730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.206 [2024-11-20 12:39:04.862732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.862742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.862754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.862762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 12:39:04.862815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.862820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.862823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.862833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.862844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.862853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 12:39:04.862905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.862910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.862913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.862923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.862929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.862935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.862943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 12:39:04.863001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.863006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.863008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.863019] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.863031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.863040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 12:39:04.863093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.863098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.863101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.863111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.863122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.863130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 12:39:04.863181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.863186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.863189] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.863200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.863211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.863219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 12:39:04.863274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.863280] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.863283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.863293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.863304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.863312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 
12:39:04.863372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.863377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.863379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.863390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.863396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a50690) 00:24:59.207 [2024-11-20 12:39:04.863402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.207 [2024-11-20 12:39:04.867415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab2580, cid 3, qid 0 00:24:59.207 [2024-11-20 12:39:04.867425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:59.207 [2024-11-20 12:39:04.867430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:59.207 [2024-11-20 12:39:04.867433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:59.207 [2024-11-20 12:39:04.867436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab2580) on tqpair=0x1a50690 00:24:59.207 [2024-11-20 12:39:04.867443] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:24:59.207 0% 00:24:59.207 Data Units Read: 0 00:24:59.207 Data Units Written: 0 00:24:59.207 Host Read Commands: 0 00:24:59.207 Host Write Commands: 0 00:24:59.207 Controller Busy Time: 0 minutes 00:24:59.207 Power Cycles: 0 00:24:59.207 Power On Hours: 0 hours 00:24:59.207 
Unsafe Shutdowns: 0 00:24:59.207 Unrecoverable Media Errors: 0 00:24:59.207 Lifetime Error Log Entries: 0 00:24:59.207 Warning Temperature Time: 0 minutes 00:24:59.207 Critical Temperature Time: 0 minutes 00:24:59.207 00:24:59.207 Number of Queues 00:24:59.207 ================ 00:24:59.207 Number of I/O Submission Queues: 127 00:24:59.207 Number of I/O Completion Queues: 127 00:24:59.207 00:24:59.207 Active Namespaces 00:24:59.207 ================= 00:24:59.207 Namespace ID:1 00:24:59.207 Error Recovery Timeout: Unlimited 00:24:59.207 Command Set Identifier: NVM (00h) 00:24:59.207 Deallocate: Supported 00:24:59.207 Deallocated/Unwritten Error: Not Supported 00:24:59.207 Deallocated Read Value: Unknown 00:24:59.207 Deallocate in Write Zeroes: Not Supported 00:24:59.207 Deallocated Guard Field: 0xFFFF 00:24:59.207 Flush: Supported 00:24:59.207 Reservation: Supported 00:24:59.207 Namespace Sharing Capabilities: Multiple Controllers 00:24:59.207 Size (in LBAs): 131072 (0GiB) 00:24:59.207 Capacity (in LBAs): 131072 (0GiB) 00:24:59.207 Utilization (in LBAs): 131072 (0GiB) 00:24:59.207 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:59.207 EUI64: ABCDEF0123456789 00:24:59.207 UUID: 7e5f8246-3ddd-47ec-b143-3b5a3f7491dd 00:24:59.207 Thin Provisioning: Not Supported 00:24:59.207 Per-NS Atomic Units: Yes 00:24:59.207 Atomic Boundary Size (Normal): 0 00:24:59.207 Atomic Boundary Size (PFail): 0 00:24:59.207 Atomic Boundary Offset: 0 00:24:59.207 Maximum Single Source Range Length: 65535 00:24:59.207 Maximum Copy Length: 65535 00:24:59.207 Maximum Source Range Count: 1 00:24:59.207 NGUID/EUI64 Never Reused: No 00:24:59.207 Namespace Write Protected: No 00:24:59.207 Number of LBA Formats: 1 00:24:59.207 Current LBA Format: LBA Format #00 00:24:59.207 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:59.207 00:24:59.207 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:59.207 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.207 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.207 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:59.207 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.207 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:59.207 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:59.208 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.208 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:59.208 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.208 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:59.208 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.208 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.208 rmmod nvme_tcp 00:24:59.208 rmmod nvme_fabrics 00:24:59.208 rmmod nvme_keyring 00:24:59.467 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.467 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:59.467 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:59.467 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1019114 ']' 00:24:59.468 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1019114 00:24:59.468 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1019114 ']' 00:24:59.468 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1019114 00:24:59.468 12:39:04 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:59.468 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.468 12:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019114 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019114' 00:24:59.468 killing process with pid 1019114 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1019114 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1019114 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.468 
12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.468 12:39:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.006 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.006 00:25:02.006 real 0m10.123s 00:25:02.007 user 0m7.952s 00:25:02.007 sys 0m5.076s 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.007 ************************************ 00:25:02.007 END TEST nvmf_identify 00:25:02.007 ************************************ 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.007 ************************************ 00:25:02.007 START TEST nvmf_perf 00:25:02.007 ************************************ 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:02.007 * Looking for test storage... 
00:25:02.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.007 --rc genhtml_branch_coverage=1 00:25:02.007 --rc genhtml_function_coverage=1 00:25:02.007 --rc genhtml_legend=1 00:25:02.007 --rc geninfo_all_blocks=1 00:25:02.007 --rc geninfo_unexecuted_blocks=1 00:25:02.007 00:25:02.007 ' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:25:02.007 --rc genhtml_branch_coverage=1 00:25:02.007 --rc genhtml_function_coverage=1 00:25:02.007 --rc genhtml_legend=1 00:25:02.007 --rc geninfo_all_blocks=1 00:25:02.007 --rc geninfo_unexecuted_blocks=1 00:25:02.007 00:25:02.007 ' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.007 --rc genhtml_branch_coverage=1 00:25:02.007 --rc genhtml_function_coverage=1 00:25:02.007 --rc genhtml_legend=1 00:25:02.007 --rc geninfo_all_blocks=1 00:25:02.007 --rc geninfo_unexecuted_blocks=1 00:25:02.007 00:25:02.007 ' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.007 --rc genhtml_branch_coverage=1 00:25:02.007 --rc genhtml_function_coverage=1 00:25:02.007 --rc genhtml_legend=1 00:25:02.007 --rc geninfo_all_blocks=1 00:25:02.007 --rc geninfo_unexecuted_blocks=1 00:25:02.007 00:25:02.007 ' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.007 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:02.008 12:39:07 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.008 12:39:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.578 12:39:13 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.578 
12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:25:08.578 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:25:08.578 Found 0000:1a:00.1 (0x8086 - 
0x159b) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:25:08.578 Found net devices under 0000:1a:00.0: cvl_0_0 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.578 12:39:13 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.578 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:25:08.579 Found net devices under 0000:1a:00.1: cvl_0_1 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:25:08.579 00:25:08.579 --- 10.0.0.2 ping statistics --- 00:25:08.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.579 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:25:08.579 00:25:08.579 --- 10.0.0.1 ping statistics --- 00:25:08.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.579 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1023419 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1023419 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1023419 ']' 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.579 12:39:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:08.579 [2024-11-20 12:39:13.731504] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:25:08.579 [2024-11-20 12:39:13.731545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.579 [2024-11-20 12:39:13.807715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.579 [2024-11-20 12:39:13.846726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.579 [2024-11-20 12:39:13.846761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.579 [2024-11-20 12:39:13.846768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.579 [2024-11-20 12:39:13.846773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.579 [2024-11-20 12:39:13.846781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.579 [2024-11-20 12:39:13.848340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.579 [2024-11-20 12:39:13.848467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.579 [2024-11-20 12:39:13.848507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.579 [2024-11-20 12:39:13.848507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:08.839 12:39:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:21.052 12:39:26 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:21.052 [2024-11-20 12:39:26.626139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.052 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.312 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:21.312 12:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.312 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:21.312 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:21.571 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.830 [2024-11-20 12:39:27.358520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.830 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:21.830 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:21.830 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:21.830 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:21.830 12:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:23.207 Initializing NVMe Controllers 00:25:23.207 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:23.207 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:23.207 Initialization complete. Launching workers. 00:25:23.207 ======================================================== 00:25:23.207 Latency(us) 00:25:23.207 Device Information : IOPS MiB/s Average min max 00:25:23.207 PCIE (0000:5e:00.0) NSID 1 from core 0: 107351.26 419.34 297.68 9.64 6179.23 00:25:23.207 ======================================================== 00:25:23.207 Total : 107351.26 419.34 297.68 9.64 6179.23 00:25:23.207 00:25:23.207 12:39:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:24.584 Initializing NVMe Controllers 00:25:24.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:24.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:24.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:24.584 Initialization complete. Launching workers. 
00:25:24.584 ======================================================== 00:25:24.584 Latency(us) 00:25:24.584 Device Information : IOPS MiB/s Average min max 00:25:24.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 228.87 0.89 4438.66 94.30 48368.79 00:25:24.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.96 0.25 15757.73 7941.55 59854.25 00:25:24.584 ======================================================== 00:25:24.584 Total : 292.84 1.14 6911.09 94.30 59854.25 00:25:24.584 00:25:24.584 12:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.522 Initializing NVMe Controllers 00:25:25.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:25.522 Initialization complete. Launching workers. 
00:25:25.522 ======================================================== 00:25:25.522 Latency(us) 00:25:25.522 Device Information : IOPS MiB/s Average min max 00:25:25.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12504.83 48.85 2559.47 265.38 7561.06 00:25:25.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3830.47 14.96 8355.27 5074.45 16138.47 00:25:25.522 ======================================================== 00:25:25.522 Total : 16335.30 63.81 3918.53 265.38 16138.47 00:25:25.522 00:25:25.781 12:39:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:25.781 12:39:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:25.781 12:39:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:28.316 Initializing NVMe Controllers 00:25:28.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.316 Controller IO queue size 128, less than required. 00:25:28.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:28.316 Controller IO queue size 128, less than required. 00:25:28.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:28.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:28.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:28.316 Initialization complete. Launching workers. 
00:25:28.316 ======================================================== 00:25:28.316 Latency(us) 00:25:28.316 Device Information : IOPS MiB/s Average min max 00:25:28.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2398.97 599.74 54157.35 30030.78 95021.01 00:25:28.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.49 152.62 216084.07 78001.01 297993.34 00:25:28.316 ======================================================== 00:25:28.316 Total : 3009.46 752.37 87005.41 30030.78 297993.34 00:25:28.316 00:25:28.316 12:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:28.316 No valid NVMe controllers or AIO or URING devices found 00:25:28.316 Initializing NVMe Controllers 00:25:28.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.316 Controller IO queue size 128, less than required. 00:25:28.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:28.316 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:28.316 Controller IO queue size 128, less than required. 00:25:28.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:28.316 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:28.316 WARNING: Some requested NVMe devices were skipped 00:25:28.316 12:39:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:30.851 Initializing NVMe Controllers 00:25:30.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.851 Controller IO queue size 128, less than required. 00:25:30.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:30.851 Controller IO queue size 128, less than required. 00:25:30.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:30.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:30.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:30.851 Initialization complete. Launching workers. 
00:25:30.851 00:25:30.851 ==================== 00:25:30.851 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:30.851 TCP transport: 00:25:30.851 polls: 19310 00:25:30.851 idle_polls: 12731 00:25:30.851 sock_completions: 6579 00:25:30.851 nvme_completions: 8363 00:25:30.851 submitted_requests: 12646 00:25:30.851 queued_requests: 1 00:25:30.851 00:25:30.851 ==================== 00:25:30.851 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:30.851 TCP transport: 00:25:30.851 polls: 24600 00:25:30.851 idle_polls: 16772 00:25:30.851 sock_completions: 7828 00:25:30.851 nvme_completions: 8975 00:25:30.851 submitted_requests: 13532 00:25:30.851 queued_requests: 1 00:25:30.851 ======================================================== 00:25:30.851 Latency(us) 00:25:30.851 Device Information : IOPS MiB/s Average min max 00:25:30.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2089.98 522.50 62216.64 41186.47 109268.86 00:25:30.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2242.94 560.74 57673.81 25214.38 91940.00 00:25:30.851 ======================================================== 00:25:30.851 Total : 4332.93 1083.23 59865.04 25214.38 109268.86 00:25:30.851 00:25:30.851 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:30.851 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:31.110 12:39:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.110 rmmod nvme_tcp 00:25:31.110 rmmod nvme_fabrics 00:25:31.110 rmmod nvme_keyring 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1023419 ']' 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1023419 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1023419 ']' 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1023419 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1023419 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1023419' 00:25:31.110 killing process with pid 1023419 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 1023419 00:25:31.110 12:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1023419 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.395 12:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.395 12:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.395 12:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:36.929 00:25:36.929 real 0m34.729s 00:25:36.929 user 1m41.626s 00:25:36.929 sys 0m8.826s 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:36.929 ************************************ 00:25:36.929 END TEST nvmf_perf 00:25:36.929 ************************************ 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.929 ************************************ 00:25:36.929 START TEST nvmf_fio_host 00:25:36.929 ************************************ 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:36.929 * Looking for test storage... 00:25:36.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.929 12:39:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.929 12:39:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:36.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.929 --rc genhtml_branch_coverage=1 00:25:36.929 --rc genhtml_function_coverage=1 00:25:36.929 --rc genhtml_legend=1 00:25:36.929 --rc geninfo_all_blocks=1 00:25:36.929 --rc geninfo_unexecuted_blocks=1 00:25:36.929 00:25:36.929 ' 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:36.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.929 --rc genhtml_branch_coverage=1 00:25:36.929 --rc genhtml_function_coverage=1 00:25:36.929 --rc genhtml_legend=1 00:25:36.929 --rc geninfo_all_blocks=1 00:25:36.929 --rc geninfo_unexecuted_blocks=1 00:25:36.929 00:25:36.929 ' 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:36.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.929 --rc genhtml_branch_coverage=1 00:25:36.929 --rc genhtml_function_coverage=1 00:25:36.929 --rc genhtml_legend=1 00:25:36.929 --rc geninfo_all_blocks=1 00:25:36.929 --rc geninfo_unexecuted_blocks=1 00:25:36.929 00:25:36.929 ' 00:25:36.929 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:36.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.929 --rc genhtml_branch_coverage=1 00:25:36.929 --rc genhtml_function_coverage=1 00:25:36.930 --rc genhtml_legend=1 00:25:36.930 --rc geninfo_all_blocks=1 00:25:36.930 --rc geninfo_unexecuted_blocks=1 00:25:36.930 00:25:36.930 ' 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.930 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.931 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.931 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.931 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:36.931 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:36.931 12:39:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.931 12:39:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:1a:00.0 (0x8086 - 0x159b)' 00:25:43.503 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:25:43.503 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.503 12:39:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:25:43.503 Found net devices under 0000:1a:00.0: cvl_0_0 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:25:43.503 Found net devices under 0000:1a:00.1: cvl_0_1 00:25:43.503 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
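The "Found net devices under …" lines above come from gather_supported_nvmf_pci_devs globbing each PCI device's sysfs node: the `net/` subdirectory of `/sys/bus/pci/devices/$pci` names the kernel interface bound to that NIC. A minimal sketch of that pattern follows; it runs against a mocked sysfs tree so no real hardware is needed (`list_net_devs` and `SYSFS_ROOT`-style arguments are illustrative names, not part of the SPDK scripts).

```shell
#!/usr/bin/env bash
# Sketch of the sysfs discovery pattern seen above: each PCI node's net/
# directory lists the interface name(s) the kernel gave that device.
list_net_devs() {
    local sysfs_root=$1 pci devs
    for pci in "$sysfs_root"/*; do
        devs=("$pci"/net/*)            # glob the net/ children, if any
        [[ -e ${devs[0]} ]] || continue  # skip devices with no net interface
        devs=("${devs[@]##*/}")        # keep only the interface names
        echo "Found net devices under ${pci##*/}: ${devs[*]}"
    done
}

# Demo against a mocked sysfs tree (stands in for /sys/bus/pci/devices)
root=$(mktemp -d)
mkdir -p "$root/0000:1a:00.0/net/cvl_0_0" "$root/0000:1a:00.1/net/cvl_0_1"
list_net_devs "$root"
# → Found net devices under 0000:1a:00.0: cvl_0_0
# → Found net devices under 0000:1a:00.1: cvl_0_1
```

The `${pci_net_devs[@]##*/}` expansion in the real script does the same basename-stripping shown here.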
00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.504 12:39:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:43.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:25:43.504 00:25:43.504 --- 10.0.0.2 ping statistics --- 00:25:43.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.504 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:25:43.504 00:25:43.504 --- 10.0.0.1 ping statistics --- 00:25:43.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.504 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1031721 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1031721 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1031721 ']' 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.504 12:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.504 [2024-11-20 12:39:48.572714] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:25:43.504 [2024-11-20 12:39:48.572753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.504 [2024-11-20 12:39:48.649323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.504 [2024-11-20 12:39:48.688726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.504 [2024-11-20 12:39:48.688763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
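Before the target app launch traced above, nvmf_tcp_init isolated the target NIC in its own network namespace so host and target stacks cannot short-circuit over loopback. The commands below recap that plumbing as captured in the log; since they need root and the physical cvl_0_* interfaces, this sketch only prints the sequence (`run`, `NS`, `TGT_IF`, and `INI_IF` are illustrative names, not SPDK helpers; on a live system `run` would execute instead of echo).

```shell
#!/usr/bin/env bash
# Dry-run recap of the namespace setup performed by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
run() { echo "+ $*"; }   # print-only stand-in for privileged execution

run ip netns add "$NS"                                         # target-side namespace
run ip link set "$TGT_IF" netns "$NS"                          # move target NIC inside it
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP, host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP, namespace side
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP port
run ping -c 1 10.0.0.2                                         # host -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> host reachability
```

This is why the target is later started as `ip netns exec cvl_0_0_ns_spdk … nvmf_tgt`: traffic between 10.0.0.1 and 10.0.0.2 must cross the real link between the two ports.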
00:25:43.504 [2024-11-20 12:39:48.688769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.504 [2024-11-20 12:39:48.688774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.504 [2024-11-20 12:39:48.688780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.504 [2024-11-20 12:39:48.690448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.504 [2024-11-20 12:39:48.690504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.504 [2024-11-20 12:39:48.690620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.504 [2024-11-20 12:39:48.690621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.762 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.762 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:43.762 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:44.021 [2024-11-20 12:39:49.543248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.021 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:44.021 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:44.021 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.021 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:44.280 Malloc1 00:25:44.280 12:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:44.280 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:44.539 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.797 [2024-11-20 12:39:50.367664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.797 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:45.069 12:39:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:45.069 12:39:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:45.332 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:45.332 fio-3.35 00:25:45.332 Starting 1 thread 00:25:47.893 00:25:47.893 test: (groupid=0, jobs=1): err= 0: pid=1032389: Wed Nov 20 12:39:53 2024 00:25:47.893 read: IOPS=12.9k, BW=50.4MiB/s (52.8MB/s)(101MiB/2005msec) 00:25:47.893 slat (nsec): min=1302, max=200471, avg=1472.20, stdev=1710.75 00:25:47.893 clat (usec): min=2482, max=9402, avg=5451.31, stdev=398.33 00:25:47.893 lat (usec): min=2512, max=9404, avg=5452.78, stdev=398.25 00:25:47.893 clat percentiles (usec): 00:25:47.893 | 1.00th=[ 4490], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:25:47.893 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:25:47.893 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6063], 00:25:47.893 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 6849], 99.95th=[ 8160], 00:25:47.893 | 99.99th=[ 9241] 00:25:47.894 bw ( KiB/s): min=50064, max=52312, per=100.00%, avg=51606.00, stdev=1036.93, samples=4 00:25:47.894 iops : min=12516, max=13078, avg=12901.50, stdev=259.23, samples=4 00:25:47.894 write: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(101MiB/2005msec); 0 zone resets 00:25:47.894 slat (nsec): min=1340, max=179324, avg=1525.18, stdev=1243.86 00:25:47.894 clat (usec): min=1914, max=8864, avg=4407.54, stdev=339.23 00:25:47.894 lat (usec): min=1926, max=8866, avg=4409.06, stdev=339.20 00:25:47.894 clat percentiles (usec): 00:25:47.894 | 1.00th=[ 3621], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4146], 00:25:47.894 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 
00:25:47.894 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4883], 00:25:47.894 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 6915], 99.95th=[ 8029], 00:25:47.894 | 99.99th=[ 8848] 00:25:47.894 bw ( KiB/s): min=50568, max=52128, per=100.00%, avg=51522.00, stdev=679.24, samples=4 00:25:47.894 iops : min=12642, max=13032, avg=12880.50, stdev=169.81, samples=4 00:25:47.894 lat (msec) : 2=0.01%, 4=4.64%, 10=95.36% 00:25:47.894 cpu : usr=68.96%, sys=29.69%, ctx=73, majf=0, minf=7 00:25:47.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:47.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:47.894 issued rwts: total=25865,25826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:47.894 00:25:47.894 Run status group 0 (all jobs): 00:25:47.894 READ: bw=50.4MiB/s (52.8MB/s), 50.4MiB/s-50.4MiB/s (52.8MB/s-52.8MB/s), io=101MiB (106MB), run=2005-2005msec 00:25:47.894 WRITE: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=101MiB (106MB), run=2005-2005msec 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:47.894 12:39:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:48.158 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:48.158 fio-3.35 00:25:48.158 Starting 1 thread 00:25:50.695 00:25:50.695 test: (groupid=0, jobs=1): err= 0: pid=1033034: Wed Nov 20 12:39:55 2024 00:25:50.695 read: IOPS=12.0k, BW=188MiB/s (197MB/s)(376MiB/2005msec) 00:25:50.695 slat (nsec): min=2163, max=88085, avg=2526.92, stdev=1227.66 00:25:50.695 clat (usec): min=1633, max=49523, avg=6289.43, stdev=3216.61 00:25:50.695 lat (usec): min=1636, max=49526, avg=6291.95, stdev=3216.67 00:25:50.695 clat percentiles (usec): 00:25:50.695 | 1.00th=[ 3228], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 4883], 00:25:50.695 | 30.00th=[ 5276], 40.00th=[ 5669], 50.00th=[ 6063], 60.00th=[ 6456], 00:25:50.695 | 70.00th=[ 6915], 80.00th=[ 7308], 90.00th=[ 7832], 95.00th=[ 8455], 00:25:50.695 | 99.00th=[10028], 99.50th=[42730], 99.90th=[47973], 99.95th=[49021], 00:25:50.695 | 99.99th=[49546] 00:25:50.695 bw ( KiB/s): min=90048, max=97312, per=49.35%, avg=94776.00, stdev=3224.27, samples=4 00:25:50.695 iops : min= 5628, max= 6082, avg=5923.50, stdev=201.52, samples=4 00:25:50.695 write: IOPS=7113, BW=111MiB/s (117MB/s)(192MiB/1731msec); 0 zone resets 00:25:50.695 slat (usec): min=25, max=381, avg=27.41, stdev= 7.04 00:25:50.695 clat (usec): min=2788, max=13322, avg=7634.44, stdev=1383.83 00:25:50.695 lat (usec): min=2816, max=13433, avg=7661.85, stdev=1385.48 00:25:50.695 clat percentiles (usec): 00:25:50.695 | 1.00th=[ 5080], 5.00th=[ 5669], 10.00th=[ 6063], 
20.00th=[ 6521], 00:25:50.695 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7767], 00:25:50.695 | 70.00th=[ 8160], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10159], 00:25:50.695 | 99.00th=[11469], 99.50th=[11863], 99.90th=[13042], 99.95th=[13173], 00:25:50.695 | 99.99th=[13304] 00:25:50.695 bw ( KiB/s): min=93632, max=100960, per=86.55%, avg=98504.00, stdev=3300.85, samples=4 00:25:50.695 iops : min= 5852, max= 6310, avg=6156.50, stdev=206.30, samples=4 00:25:50.695 lat (msec) : 2=0.03%, 4=4.59%, 10=92.58%, 20=2.45%, 50=0.35% 00:25:50.695 cpu : usr=79.84%, sys=19.16%, ctx=46, majf=0, minf=3 00:25:50.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:50.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:50.695 issued rwts: total=24068,12313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:50.695 00:25:50.695 Run status group 0 (all jobs): 00:25:50.695 READ: bw=188MiB/s (197MB/s), 188MiB/s-188MiB/s (197MB/s-197MB/s), io=376MiB (394MB), run=2005-2005msec 00:25:50.695 WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=192MiB (202MB), run=1731-1731msec 00:25:50.695 12:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:50.695 rmmod nvme_tcp 00:25:50.695 rmmod nvme_fabrics 00:25:50.695 rmmod nvme_keyring 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1031721 ']' 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1031721 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1031721 ']' 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1031721 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1031721 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1031721' 00:25:50.695 killing process with pid 1031721 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1031721 00:25:50.695 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1031721 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.955 12:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.859 12:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.859 00:25:52.859 real 0m16.442s 00:25:52.859 user 0m54.249s 00:25:52.859 sys 0m6.770s 00:25:52.859 12:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.859 12:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.859 
************************************ 00:25:52.859 END TEST nvmf_fio_host 00:25:52.859 ************************************ 00:25:53.118 12:39:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:53.118 12:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.119 ************************************ 00:25:53.119 START TEST nvmf_failover 00:25:53.119 ************************************ 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:53.119 * Looking for test storage... 00:25:53.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.119 12:39:58 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.119 --rc genhtml_branch_coverage=1 00:25:53.119 --rc genhtml_function_coverage=1 00:25:53.119 --rc genhtml_legend=1 00:25:53.119 --rc geninfo_all_blocks=1 00:25:53.119 --rc geninfo_unexecuted_blocks=1 00:25:53.119 00:25:53.119 ' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.119 --rc genhtml_branch_coverage=1 00:25:53.119 --rc genhtml_function_coverage=1 00:25:53.119 --rc genhtml_legend=1 00:25:53.119 --rc geninfo_all_blocks=1 00:25:53.119 --rc geninfo_unexecuted_blocks=1 00:25:53.119 00:25:53.119 ' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.119 --rc genhtml_branch_coverage=1 00:25:53.119 --rc genhtml_function_coverage=1 00:25:53.119 --rc genhtml_legend=1 00:25:53.119 --rc geninfo_all_blocks=1 00:25:53.119 --rc geninfo_unexecuted_blocks=1 00:25:53.119 00:25:53.119 ' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.119 --rc genhtml_branch_coverage=1 00:25:53.119 --rc genhtml_function_coverage=1 00:25:53.119 --rc 
genhtml_legend=1 00:25:53.119 --rc geninfo_all_blocks=1 00:25:53.119 --rc geninfo_unexecuted_blocks=1 00:25:53.119 00:25:53.119 ' 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.119 12:39:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.119 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.120 12:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.690 12:40:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:25:59.690 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:25:59.690 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:25:59.691 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:25:59.691 Found net devices under 0000:1a:00.0: cvl_0_0 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:25:59.691 Found net devices under 0000:1a:00.1: cvl_0_1 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:25:59.691 00:25:59.691 --- 10.0.0.2 ping statistics --- 00:25:59.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.691 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:25:59.691 00:25:59.691 --- 10.0.0.1 ping statistics --- 00:25:59.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.691 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.691 12:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1037070 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1037070 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1037070 ']' 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.691 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:59.691 [2024-11-20 12:40:05.094358] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:25:59.691 [2024-11-20 12:40:05.094406] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.691 [2024-11-20 12:40:05.169539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:59.691 [2024-11-20 12:40:05.208509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.691 [2024-11-20 12:40:05.208542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.691 [2024-11-20 12:40:05.208549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.691 [2024-11-20 12:40:05.208554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:59.691 [2024-11-20 12:40:05.208558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.691 [2024-11-20 12:40:05.210047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.691 [2024-11-20 12:40:05.210159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.691 [2024-11-20 12:40:05.210160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.261 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.261 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:00.261 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.261 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.261 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:00.261 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.261 12:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:00.520 [2024-11-20 12:40:06.101661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.520 12:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:00.809 Malloc0 00:26:00.809 12:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.809 12:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.068 12:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.328 [2024-11-20 12:40:06.871770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.328 12:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:01.328 [2024-11-20 12:40:07.064329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:01.586 [2024-11-20 12:40:07.248916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1037614 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1037614 /var/tmp/bdevperf.sock 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1037614 ']' 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:01.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.586 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:01.844 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.844 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:01.844 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:02.102 NVMe0n1 00:26:02.102 12:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:02.360 00:26:02.360 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1037625 00:26:02.360 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:02.360 12:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:26:03.294 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.553 [2024-11-20 12:40:09.216970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120710 is same with the state(6) to be set 00:26:03.554 12:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:06.841 12:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:07.107 00:26:07.107 12:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:07.107 12:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:10.398 12:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.398 [2024-11-20 12:40:15.993450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.398 12:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:11.335 12:40:17 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:11.594 12:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1037625 00:26:18.379 { 00:26:18.379 "results": [ 00:26:18.379 { 00:26:18.379 "job": "NVMe0n1", 00:26:18.379 "core_mask": "0x1", 00:26:18.379 "workload": "verify", 00:26:18.379 "status": "finished", 00:26:18.379 "verify_range": { 00:26:18.379 "start": 0, 00:26:18.379 "length": 16384 00:26:18.379 }, 00:26:18.379 "queue_depth": 128, 00:26:18.380 "io_size": 4096, 00:26:18.380 "runtime": 15.009012, 00:26:18.380 "iops": 12553.391255866809, 00:26:18.380 "mibps": 49.03668459322972, 00:26:18.380 "io_failed": 7149, 00:26:18.380 "io_timeout": 0, 00:26:18.380 "avg_latency_us": 9803.672727607425, 00:26:18.380 "min_latency_us": 381.6727272727273, 00:26:18.380 "max_latency_us": 13107.2 00:26:18.380 } 00:26:18.380 ], 00:26:18.380 "core_count": 1 00:26:18.380 } 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1037614 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1037614 ']' 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1037614 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1037614 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.380 12:40:23 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1037614' 00:26:18.380 killing process with pid 1037614 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1037614 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1037614 00:26:18.380 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:18.380 [2024-11-20 12:40:07.306177] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:18.380 [2024-11-20 12:40:07.306225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037614 ] 00:26:18.380 [2024-11-20 12:40:07.379519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.380 [2024-11-20 12:40:07.418145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.380 Running I/O for 15 seconds... 
00:26:18.380 12530.00 IOPS, 48.95 MiB/s [2024-11-20T11:40:24.144Z] [2024-11-20 12:40:09.218454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-11-20 12:40:09.218487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-11-20 12:40:09.218508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-11-20 12:40:09.218523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-11-20 12:40:09.218536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-11-20 12:40:09.218549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.380 [2024-11-20 12:40:09.218563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-11-20 12:40:09.218577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-11-20 12:40:09.218589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:18.380 [2024-11-20 12:40:09.218791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-11-20 12:40:09.218876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.380 [2024-11-20 12:40:09.218881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.218992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.218998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 
[2024-11-20 12:40:09.219010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.381 [2024-11-20 12:40:09.219063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 
[2024-11-20 12:40:09.219235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.381 [2024-11-20 12:40:09.219301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.381 [2024-11-20 12:40:09.219307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:18.381 [2024-11-20 12:40:09.219313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 25 further WRITE command/completion pairs elided (lba 109728-109920, various cids), each ABORTED - SQ DELETION (00/08) ...]
00:26:18.382 [2024-11-20 12:40:09.219653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.382 [2024-11-20 12:40:09.219659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 6 further READ command/completion pairs elided (lba 109248-109288), each ABORTED - SQ DELETION (00/08) ...]
00:26:18.382 [2024-11-20 12:40:09.219743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:18.382 [2024-11-20 12:40:09.219749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 27 further WRITE command/completion pairs elided (lba 109936-110144), each ABORTED - SQ DELETION (00/08) ...]
00:26:18.383 [2024-11-20 12:40:09.220116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:18.383 [2024-11-20 12:40:09.220122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110152 len:8 PRP1 0x0 PRP2 0x0
00:26:18.383 [2024-11-20 12:40:09.220128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 4 further queued WRITEs (lba 110160-110184) completed manually after nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, each ABORTED - SQ DELETION (00/08) ...]
00:26:18.383 [2024-11-20 12:40:09.220256] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... 4 admin ASYNC EVENT REQUEST (0c) command/completion pairs elided (qid:0 cid:0-3), each ABORTED - SQ DELETION (00/08) ...]
00:26:18.383 [2024-11-20 12:40:09.220324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:18.383 [2024-11-20 12:40:09.222838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:18.383 [2024-11-20 12:40:09.222865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489780 (9): Bad file descriptor
00:26:18.383 [2024-11-20 12:40:09.248515] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:18.383 12396.50 IOPS, 48.42 MiB/s [2024-11-20T11:40:24.147Z] 12521.33 IOPS, 48.91 MiB/s [2024-11-20T11:40:24.147Z] 12590.00 IOPS, 49.18 MiB/s [2024-11-20T11:40:24.147Z]
00:26:18.383 [2024-11-20 12:40:12.795033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.383 [2024-11-20 12:40:12.795074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further READ command/completion pairs elided (lba 86648-86928, various cids), each ABORTED - SQ DELETION (00/08) ...]
00:26:18.384 [2024-11-20 12:40:12.795582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.384 [2024-11-20 12:40:12.795590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.384 [2024-11-20 12:40:12.795597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.384 [2024-11-20 12:40:12.795603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.384 [2024-11-20 12:40:12.795610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.384 [2024-11-20 12:40:12.795616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.384 [2024-11-20 12:40:12.795623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.384 [2024-11-20 12:40:12.795629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.384 [2024-11-20 12:40:12.795636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.384 [2024-11-20 12:40:12.795642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.384 [2024-11-20 12:40:12.795649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 
[2024-11-20 12:40:12.795668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.795788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.795801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.795815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.795827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.795840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.795853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.795866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 
[2024-11-20 12:40:12.795892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.795991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.795997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.796011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.796024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.796037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.796049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.796063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.796075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.385 [2024-11-20 12:40:12.796089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.796102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 
[2024-11-20 12:40:12.796115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.796128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.796140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.796153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.385 [2024-11-20 12:40:12.796160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.385 [2024-11-20 12:40:12.796166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796186] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 
[2024-11-20 12:40:12.796497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.386 [2024-11-20 12:40:12.796693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.386 [2024-11-20 12:40:12.796698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:12.796711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 
12:40:12.796718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:12.796724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:12.796737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:12.796751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:12.796764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:18.387 [2024-11-20 12:40:12.796792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:18.387 [2024-11-20 12:40:12.796798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87656 len:8 PRP1 0x0 PRP2 0x0 00:26:18.387 [2024-11-20 12:40:12.796805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796846] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:18.387 [2024-11-20 12:40:12.796866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.387 [2024-11-20 12:40:12.796873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.387 [2024-11-20 12:40:12.796885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.387 [2024-11-20 12:40:12.796897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.387 [2024-11-20 12:40:12.796910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:12.796915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:26:18.387 [2024-11-20 12:40:12.796937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489780 (9): Bad file descriptor 00:26:18.387 [2024-11-20 12:40:12.799466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:18.387 [2024-11-20 12:40:12.869246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:26:18.387 12394.60 IOPS, 48.42 MiB/s [2024-11-20T11:40:24.151Z] 12435.67 IOPS, 48.58 MiB/s [2024-11-20T11:40:24.151Z] 12485.86 IOPS, 48.77 MiB/s [2024-11-20T11:40:24.151Z] 12519.12 IOPS, 48.90 MiB/s [2024-11-20T11:40:24.151Z] 12532.44 IOPS, 48.95 MiB/s [2024-11-20T11:40:24.151Z] [2024-11-20 12:40:17.191713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 
12:40:17.191805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:18.387 [2024-11-20 12:40:17.191955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.191987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.191993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.192006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.192019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.387 [2024-11-20 12:40:17.192032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.387 [2024-11-20 12:40:17.192046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.387 [2024-11-20 12:40:17.192059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.387 [2024-11-20 12:40:17.192072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.387 [2024-11-20 12:40:17.192084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.387 [2024-11-20 12:40:17.192092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.387 [2024-11-20 12:40:17.192098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.388 [2024-11-20 12:40:17.192140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 
[2024-11-20 12:40:17.192187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.388 [2024-11-20 12:40:17.192366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 
[2024-11-20 12:40:17.192417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.388 [2024-11-20 12:40:17.192583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.388 [2024-11-20 12:40:17.192590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.389 [2024-11-20 12:40:17.192635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 
[2024-11-20 12:40:17.192644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.389 [2024-11-20 12:40:17.192650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.389 [2024-11-20 12:40:17.192662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.389 [2024-11-20 12:40:17.192675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.389 [2024-11-20 12:40:17.192688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.389 [2024-11-20 12:40:17.192701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [2024-11-20 12:40:17.192787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.389 [2024-11-20 12:40:17.192793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.389 [... repeated nvme_io_qpair_print_command READ/WRITE records and matching "ABORTED - SQ DELETION (00/08)" completions for lba 31480-31856 omitted ...] [2024-11-20 12:40:17.193447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b6c80 is same with the state(6) to be set 00:26:18.390 [2024-11-20 12:40:17.193455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:18.390 [2024-11-20 12:40:17.193460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:18.390 [2024-11-20 12:40:17.193467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31864 len:8 PRP1 0x0 PRP2 0x0 00:26:18.390 [2024-11-20 12:40:17.193472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.390 [2024-11-20 12:40:17.193516] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:18.390 [2024-11-20 12:40:17.193536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.390 [2024-11-20 12:40:17.193543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.390 [2024-11-20 12:40:17.193550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.390 [2024-11-20 12:40:17.193556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.390 [2024-11-20 12:40:17.193562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.390 [2024-11-20 12:40:17.193568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.390 [2024-11-20 12:40:17.193574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.390 [2024-11-20 12:40:17.193580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.390 [2024-11-20 12:40:17.193586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:26:18.390 [2024-11-20 12:40:17.196135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:18.390 [2024-11-20 12:40:17.196161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489780 (9): Bad file descriptor 00:26:18.390 [2024-11-20 12:40:17.226262] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:18.390 12500.50 IOPS, 48.83 MiB/s [2024-11-20T11:40:24.154Z] 12522.91 IOPS, 48.92 MiB/s [2024-11-20T11:40:24.154Z] 12531.58 IOPS, 48.95 MiB/s [2024-11-20T11:40:24.154Z] 12538.00 IOPS, 48.98 MiB/s [2024-11-20T11:40:24.154Z] 12538.07 IOPS, 48.98 MiB/s [2024-11-20T11:40:24.154Z] 12552.40 IOPS, 49.03 MiB/s 00:26:18.390 Latency(us) 00:26:18.390 [2024-11-20T11:40:24.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.390 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:18.390 Verification LBA range: start 0x0 length 0x4000 00:26:18.390 NVMe0n1 : 15.01 12553.39 49.04 476.31 0.00 9803.67 381.67 13107.20 00:26:18.390 [2024-11-20T11:40:24.154Z] =================================================================================================================== 00:26:18.390 [2024-11-20T11:40:24.154Z] Total : 12553.39 49.04 476.31 0.00 9803.67 381.67 13107.20 00:26:18.390 Received shutdown signal, test time was about 15.000000 seconds 00:26:18.390 00:26:18.390 Latency(us) 00:26:18.390 [2024-11-20T11:40:24.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.391 [2024-11-20T11:40:24.155Z] =================================================================================================================== 00:26:18.391 [2024-11-20T11:40:24.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1040474 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1040474 /var/tmp/bdevperf.sock 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1040474 ']' 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:18.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:18.391 [2024-11-20 12:40:23.843901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:18.391 12:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:18.391 [2024-11-20 12:40:24.020389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:18.391 12:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:18.649 NVMe0n1 00:26:18.649 12:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:18.908 00:26:18.908 12:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:19.166 00:26:19.166 12:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:19.166 12:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:19.430 12:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:19.688 12:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:22.977 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:22.977 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:22.977 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1041310 00:26:22.977 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:22.977 12:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1041310 00:26:23.928 { 00:26:23.928 "results": [ 00:26:23.928 { 00:26:23.928 "job": "NVMe0n1", 00:26:23.928 "core_mask": "0x1", 00:26:23.928 "workload": "verify", 00:26:23.928 "status": "finished", 00:26:23.928 "verify_range": { 00:26:23.928 "start": 0, 00:26:23.928 "length": 16384 00:26:23.928 }, 00:26:23.928 "queue_depth": 128, 00:26:23.928 "io_size": 4096, 00:26:23.928 "runtime": 1.01037, 00:26:23.928 "iops": 12622.10873244455, 00:26:23.928 "mibps": 49.305112236111526, 00:26:23.928 "io_failed": 0, 00:26:23.928 "io_timeout": 0, 00:26:23.928 "avg_latency_us": 
10106.126031521995, 00:26:23.928 "min_latency_us": 2129.92, 00:26:23.928 "max_latency_us": 14417.92 00:26:23.928 } 00:26:23.928 ], 00:26:23.928 "core_count": 1 00:26:23.928 } 00:26:23.928 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:23.928 [2024-11-20 12:40:23.474374] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:23.928 [2024-11-20 12:40:23.474430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040474 ] 00:26:23.928 [2024-11-20 12:40:23.549146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.928 [2024-11-20 12:40:23.583988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.928 [2024-11-20 12:40:25.172445] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:23.928 [2024-11-20 12:40:25.172486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.928 [2024-11-20 12:40:25.172496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.928 [2024-11-20 12:40:25.172504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.928 [2024-11-20 12:40:25.172511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.928 [2024-11-20 12:40:25.172517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:23.928 [2024-11-20 12:40:25.172523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.928 [2024-11-20 12:40:25.172530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.928 [2024-11-20 12:40:25.172536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.928 [2024-11-20 12:40:25.172542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:23.928 [2024-11-20 12:40:25.172564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:23.928 [2024-11-20 12:40:25.172577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5a780 (9): Bad file descriptor 00:26:23.928 [2024-11-20 12:40:25.264571] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:23.928 Running I/O for 1 seconds... 
00:26:23.928 12625.00 IOPS, 49.32 MiB/s 00:26:23.928 Latency(us) 00:26:23.928 [2024-11-20T11:40:29.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.928 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:23.928 Verification LBA range: start 0x0 length 0x4000 00:26:23.928 NVMe0n1 : 1.01 12622.11 49.31 0.00 0.00 10106.13 2129.92 14417.92 00:26:23.928 [2024-11-20T11:40:29.692Z] =================================================================================================================== 00:26:23.928 [2024-11-20T11:40:29.692Z] Total : 12622.11 49.31 0.00 0.00 10106.13 2129.92 14417.92 00:26:23.928 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:23.928 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:24.188 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:24.188 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:24.188 12:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:24.447 12:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:24.706 12:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1040474 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1040474 ']' 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1040474 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1040474 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1040474' 00:26:27.993 killing process with pid 1040474 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1040474 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1040474 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:27.993 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:28.253 rmmod nvme_tcp 00:26:28.253 rmmod nvme_fabrics 00:26:28.253 rmmod nvme_keyring 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1037070 ']' 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1037070 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1037070 ']' 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1037070 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1037070 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1037070' 00:26:28.253 killing process with pid 1037070 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1037070 00:26:28.253 12:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1037070 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.512 12:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:31.049 00:26:31.049 real 0m37.553s 00:26:31.049 user 1m57.691s 00:26:31.049 sys 
0m7.759s 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:31.049 ************************************ 00:26:31.049 END TEST nvmf_failover 00:26:31.049 ************************************ 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.049 ************************************ 00:26:31.049 START TEST nvmf_host_discovery 00:26:31.049 ************************************ 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:31.049 * Looking for test storage... 
00:26:31.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.049 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.050 --rc genhtml_branch_coverage=1 00:26:31.050 --rc genhtml_function_coverage=1 00:26:31.050 --rc 
genhtml_legend=1 00:26:31.050 --rc geninfo_all_blocks=1 00:26:31.050 --rc geninfo_unexecuted_blocks=1 00:26:31.050 00:26:31.050 ' 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.050 --rc genhtml_branch_coverage=1 00:26:31.050 --rc genhtml_function_coverage=1 00:26:31.050 --rc genhtml_legend=1 00:26:31.050 --rc geninfo_all_blocks=1 00:26:31.050 --rc geninfo_unexecuted_blocks=1 00:26:31.050 00:26:31.050 ' 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.050 --rc genhtml_branch_coverage=1 00:26:31.050 --rc genhtml_function_coverage=1 00:26:31.050 --rc genhtml_legend=1 00:26:31.050 --rc geninfo_all_blocks=1 00:26:31.050 --rc geninfo_unexecuted_blocks=1 00:26:31.050 00:26:31.050 ' 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:31.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.050 --rc genhtml_branch_coverage=1 00:26:31.050 --rc genhtml_function_coverage=1 00:26:31.050 --rc genhtml_legend=1 00:26:31.050 --rc geninfo_all_blocks=1 00:26:31.050 --rc geninfo_unexecuted_blocks=1 00:26:31.050 00:26:31.050 ' 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.050 12:40:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.050 12:40:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.050 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.051 12:40:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.051 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.625 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.625 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.625 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.626 
12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.626 12:40:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:26:37.626 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:26:37.626 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:26:37.626 Found net devices under 0000:1a:00.0: cvl_0_0 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:26:37.626 Found net devices under 0000:1a:00.1: cvl_0_1 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:37.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:26:37.626 00:26:37.626 --- 10.0.0.2 ping statistics --- 00:26:37.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.626 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:37.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:26:37.626 00:26:37.626 --- 10.0.0.1 ping statistics --- 00:26:37.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.626 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.626 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.627 
12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1045885 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1045885 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1045885 ']' 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.627 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.627 [2024-11-20 12:40:42.718911] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:37.627 [2024-11-20 12:40:42.718951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.627 [2024-11-20 12:40:42.795477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.627 [2024-11-20 12:40:42.831969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.627 [2024-11-20 12:40:42.831999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.627 [2024-11-20 12:40:42.832005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.627 [2024-11-20 12:40:42.832010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.627 [2024-11-20 12:40:42.832015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:37.627 [2024-11-20 12:40:42.832596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 [2024-11-20 12:40:43.566043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 [2024-11-20 12:40:43.578218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:37.886 12:40:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 null0 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 null1 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.886 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1046157 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1046157 /tmp/host.sock 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1046157 ']' 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:37.887 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.887 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.146 [2024-11-20 12:40:43.655367] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:38.146 [2024-11-20 12:40:43.655404] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046157 ] 00:26:38.146 [2024-11-20 12:40:43.730551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.146 [2024-11-20 12:40:43.768567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:38.714 
12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.714 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:38.973 12:40:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.973 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:38.974 
12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.974 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.233 [2024-11-20 12:40:44.777348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:39.233 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:39.234 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:39.801 [2024-11-20 12:40:45.475002] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:39.802 [2024-11-20 12:40:45.475019] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:39.802 [2024-11-20 12:40:45.475029] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:39.802 [2024-11-20 12:40:45.561285] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:40.060 [2024-11-20 12:40:45.739297] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:40.060 [2024-11-20 12:40:45.740106] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x21d1210:1 started. 00:26:40.060 [2024-11-20 12:40:45.741429] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:40.060 [2024-11-20 12:40:45.741444] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:40.060 [2024-11-20 12:40:45.744190] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21d1210 was disconnected and freed. delete nvme_qpair. 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:40.318 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.318 12:40:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.318 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:40.577 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.577 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:40.577 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:40.578 
12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:40.578 [2024-11-20 12:40:46.181828] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21d15e0:1 started. 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:40.578 [2024-11-20 12:40:46.185209] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21d15e0 was disconnected and freed. delete nvme_qpair. 
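The `get_bdev_list` calls traced above run `rpc_cmd -s /tmp/host.sock bdev_get_bdevs` and normalize the result with `jq -r '.[].name' | sort | xargs`. A minimal standalone reconstruction of that pipeline is sketched below; the `rpc_cmd` stub and its sample names are placeholders so the pipeline can run without an SPDK target (the real helper talks JSON-RPC over the host socket and extracts `.name` with jq):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for: rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
# The stub emits bdev names directly; the real RPC returns JSON objects.
rpc_cmd() {
  printf '%s\n' nvme0n2 nvme0n1
}

# Mirrors the harness idiom: sort for a stable order, xargs to join on one line.
get_bdev_list() {
  rpc_cmd | sort | xargs
}

get_bdev_list   # prints "nvme0n1 nvme0n2"
```

The `sort | xargs` step is what lets the trace compare the list against the literal string `"nvme0n1 nvme0n2"` regardless of enumeration order.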
00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.578 [2024-11-20 12:40:46.281343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:40.578 [2024-11-20 12:40:46.282075] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:40.578 [2024-11-20 12:40:46.282094] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.578 12:40:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:40.578 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:40.837 12:40:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.837 [2024-11-20 12:40:46.411482] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:40.837 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:40.837 [2024-11-20 12:40:46.510231] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:40.837 [2024-11-20 12:40:46.510262] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:40.837 [2024-11-20 12:40:46.510269] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:26:40.837 [2024-11-20 12:40:46.510274] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.773 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.773 [2024-11-20 12:40:47.533045] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:41.773 [2024-11-20 12:40:47.533067] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:41.773 [2024-11-20 12:40:47.533833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.774 [2024-11-20 12:40:47.533850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.774 [2024-11-20 12:40:47.533859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
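The `waitforcondition` frames in the trace (autotest_common.sh lines 918-924) show a generic polling helper: it stores the condition string, bounds the retries with `max=10`, and `eval`s the condition each pass. A sketch of that loop, reconstructed from the xtrace output with an assumed short sleep between attempts (the actual delay is not visible in the trace), is:

```shell
#!/usr/bin/env bash
# Reconstruction of the waitforcondition polling loop seen in the xtrace.
# Variable names (cond, max) match the trace; the 0.1s sleep is an assumption.
waitforcondition() {
  local cond=$1
  local max=10
  while (( max-- )); do
    # eval re-runs command substitutions like $(get_bdev_list) on every pass
    if eval "$cond"; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforcondition '[[ 1 == 1 ]]' && echo ok   # prints "ok"
```

Because the condition is passed as a single-quoted string, each retry re-evaluates any embedded `$(...)` substitution, which is why the trace re-issues `rpc_cmd` on every iteration until the bdev list matches.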
00:26:41.774 [2024-11-20 12:40:47.533867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.774 [2024-11-20 12:40:47.533875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.774 [2024-11-20 12:40:47.533882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.774 [2024-11-20 12:40:47.533894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.774 [2024-11-20 12:40:47.533901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.774 [2024-11-20 12:40:47.533909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:42.035 12:40:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.035 [2024-11-20 12:40:47.543842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.035 [2024-11-20 12:40:47.553875] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:42.035 [2024-11-20 12:40:47.553888] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:42.035 [2024-11-20 12:40:47.553892] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:42.035 [2024-11-20 12:40:47.553896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:42.035 [2024-11-20 12:40:47.553918] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:42.035 [2024-11-20 12:40:47.554117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.035 [2024-11-20 12:40:47.554130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a17d0 with addr=10.0.0.2, port=4420 00:26:42.035 [2024-11-20 12:40:47.554137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.035 [2024-11-20 12:40:47.554148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.035 [2024-11-20 12:40:47.554164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.035 [2024-11-20 12:40:47.554170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.035 [2024-11-20 12:40:47.554178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:42.035 [2024-11-20 12:40:47.554183] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:42.035 [2024-11-20 12:40:47.554188] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.035 [2024-11-20 12:40:47.554191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:42.035 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.035 [2024-11-20 12:40:47.563948] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:42.035 [2024-11-20 12:40:47.563959] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:42.035 [2024-11-20 12:40:47.563963] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:42.035 [2024-11-20 12:40:47.563967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:42.035 [2024-11-20 12:40:47.563979] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:42.035 [2024-11-20 12:40:47.564159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.035 [2024-11-20 12:40:47.564169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a17d0 with addr=10.0.0.2, port=4420 00:26:42.035 [2024-11-20 12:40:47.564176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.035 [2024-11-20 12:40:47.564185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.035 [2024-11-20 12:40:47.564200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.035 [2024-11-20 12:40:47.564206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.035 [2024-11-20 12:40:47.564212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:42.036 [2024-11-20 12:40:47.564217] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:42.036 [2024-11-20 12:40:47.564221] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.036 [2024-11-20 12:40:47.564224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:42.036 [2024-11-20 12:40:47.574010] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:42.036 [2024-11-20 12:40:47.574020] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:42.036 [2024-11-20 12:40:47.574024] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:42.036 [2024-11-20 12:40:47.574028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:42.036 [2024-11-20 12:40:47.574039] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:42.036 [2024-11-20 12:40:47.574197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.036 [2024-11-20 12:40:47.574207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a17d0 with addr=10.0.0.2, port=4420 00:26:42.036 [2024-11-20 12:40:47.574214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.036 [2024-11-20 12:40:47.574223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.036 [2024-11-20 12:40:47.574237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.036 [2024-11-20 12:40:47.574243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.036 [2024-11-20 12:40:47.574249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:42.036 [2024-11-20 12:40:47.574254] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:42.036 [2024-11-20 12:40:47.574261] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.036 [2024-11-20 12:40:47.574264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:42.036 [2024-11-20 12:40:47.584071] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:42.036 [2024-11-20 12:40:47.584085] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:42.036 [2024-11-20 12:40:47.584089] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:42.036 [2024-11-20 12:40:47.584092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:42.036 [2024-11-20 12:40:47.584106] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:42.036 [2024-11-20 12:40:47.584303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.036 [2024-11-20 12:40:47.584315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a17d0 with addr=10.0.0.2, port=4420 00:26:42.036 [2024-11-20 12:40:47.584322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.036 [2024-11-20 12:40:47.584331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.036 [2024-11-20 12:40:47.584346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.036 [2024-11-20 12:40:47.584352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.036 [2024-11-20 12:40:47.584359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:42.036 [2024-11-20 12:40:47.584364] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:42.036 [2024-11-20 12:40:47.584368] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.036 [2024-11-20 12:40:47.584371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:42.036 [2024-11-20 12:40:47.594136] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:42.036 [2024-11-20 12:40:47.594147] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:42.036 [2024-11-20 12:40:47.594153] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:42.036 [2024-11-20 12:40:47.594160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:42.036 [2024-11-20 12:40:47.594173] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.036 [2024-11-20 12:40:47.594398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.036 [2024-11-20 12:40:47.594410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a17d0 with addr=10.0.0.2, port=4420 00:26:42.036 [2024-11-20 12:40:47.594422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.036 [2024-11-20 12:40:47.594431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.036 [2024-11-20 12:40:47.594445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.036 [2024-11-20 12:40:47.594451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.036 [2024-11-20 12:40:47.594457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:42.036 [2024-11-20 12:40:47.594462] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:42.036 [2024-11-20 12:40:47.594466] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.036 [2024-11-20 12:40:47.594469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:42.036 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.036 [2024-11-20 12:40:47.604203] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:42.036 [2024-11-20 12:40:47.604217] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:42.036 [2024-11-20 12:40:47.604221] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:42.036 [2024-11-20 12:40:47.604224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:42.036 [2024-11-20 12:40:47.604237] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:42.036 [2024-11-20 12:40:47.604419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.036 [2024-11-20 12:40:47.604430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a17d0 with addr=10.0.0.2, port=4420 00:26:42.037 [2024-11-20 12:40:47.604437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.037 [2024-11-20 12:40:47.604447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.037 [2024-11-20 12:40:47.604455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.037 [2024-11-20 12:40:47.604461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.037 [2024-11-20 12:40:47.604467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:42.037 [2024-11-20 12:40:47.604472] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:42.037 [2024-11-20 12:40:47.604476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.037 [2024-11-20 12:40:47.604480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:42.037 [2024-11-20 12:40:47.614267] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:42.037 [2024-11-20 12:40:47.614285] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:42.037 [2024-11-20 12:40:47.614289] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:42.037 [2024-11-20 12:40:47.614293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:42.037 [2024-11-20 12:40:47.614305] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:42.037 [2024-11-20 12:40:47.614542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.037 [2024-11-20 12:40:47.614555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a17d0 with addr=10.0.0.2, port=4420 00:26:42.037 [2024-11-20 12:40:47.614562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a17d0 is same with the state(6) to be set 00:26:42.037 [2024-11-20 12:40:47.614571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a17d0 (9): Bad file descriptor 00:26:42.037 [2024-11-20 12:40:47.614585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.037 [2024-11-20 12:40:47.614591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.037 [2024-11-20 12:40:47.614597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:42.037 [2024-11-20 12:40:47.614602] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:42.037 [2024-11-20 12:40:47.614606] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.037 [2024-11-20 12:40:47.614609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:42.037 [2024-11-20 12:40:47.619603] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:42.037 [2024-11-20 12:40:47.619618] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.037 
12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:42.037 12:40:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:42.037 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.038 
12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.038 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:42.297 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:42.298 12:40:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.298 12:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.235 [2024-11-20 12:40:48.943850] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:43.235 [2024-11-20 12:40:48.943866] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:43.235 [2024-11-20 12:40:48.943877] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.494 [2024-11-20 12:40:49.073262] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:43.753 [2024-11-20 12:40:49.380632] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:43.753 [2024-11-20 12:40:49.381248] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x21d75f0:1 started. 00:26:43.753 [2024-11-20 12:40:49.382789] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:43.753 [2024-11-20 12:40:49.382813] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:43.753 [2024-11-20 12:40:49.384002] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x21d75f0 was disconnected and freed. delete nvme_qpair. 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:43.753 12:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.753 request: 00:26:43.753 { 00:26:43.753 "name": "nvme", 00:26:43.753 "trtype": "tcp", 00:26:43.753 "traddr": "10.0.0.2", 00:26:43.753 "adrfam": "ipv4", 00:26:43.753 "trsvcid": "8009", 00:26:43.753 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:43.753 "wait_for_attach": true, 00:26:43.753 "method": "bdev_nvme_start_discovery", 00:26:43.753 "req_id": 1 00:26:43.753 } 00:26:43.753 Got JSON-RPC error response 00:26:43.753 response: 00:26:43.753 { 00:26:43.753 "code": -17, 00:26:43.753 "message": "File exists" 00:26:43.753 } 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.753 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 
00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.754 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.754 request: 00:26:43.754 { 00:26:43.754 "name": "nvme_second", 00:26:43.754 "trtype": "tcp", 00:26:43.754 "traddr": "10.0.0.2", 00:26:43.754 "adrfam": "ipv4", 00:26:43.754 "trsvcid": "8009", 00:26:43.754 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:43.754 "wait_for_attach": true, 00:26:44.013 "method": "bdev_nvme_start_discovery", 00:26:44.013 "req_id": 1 00:26:44.013 } 00:26:44.013 Got JSON-RPC error response 00:26:44.013 response: 00:26:44.013 { 00:26:44.013 "code": -17, 00:26:44.013 "message": "File exists" 00:26:44.013 } 00:26:44.013 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:44.013 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:44.013 12:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.014 12:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.014 12:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.951 [2024-11-20 12:40:50.623657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.951 
[2024-11-20 12:40:50.623688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2620 with addr=10.0.0.2, port=8010 00:26:44.951 [2024-11-20 12:40:50.623706] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:44.951 [2024-11-20 12:40:50.623712] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:44.951 [2024-11-20 12:40:50.623718] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:45.888 [2024-11-20 12:40:51.626051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.888 [2024-11-20 12:40:51.626076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2620 with addr=10.0.0.2, port=8010 00:26:45.888 [2024-11-20 12:40:51.626087] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:45.888 [2024-11-20 12:40:51.626093] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:45.888 [2024-11-20 12:40:51.626099] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:47.266 [2024-11-20 12:40:52.628245] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:47.266 request: 00:26:47.266 { 00:26:47.266 "name": "nvme_second", 00:26:47.266 "trtype": "tcp", 00:26:47.266 "traddr": "10.0.0.2", 00:26:47.266 "adrfam": "ipv4", 00:26:47.266 "trsvcid": "8010", 00:26:47.266 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:47.266 "wait_for_attach": false, 00:26:47.266 "attach_timeout_ms": 3000, 00:26:47.266 "method": "bdev_nvme_start_discovery", 00:26:47.266 "req_id": 1 00:26:47.266 } 00:26:47.266 Got JSON-RPC error response 00:26:47.266 response: 00:26:47.266 { 00:26:47.266 "code": -110, 00:26:47.266 "message": "Connection timed out" 00:26:47.266 } 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1046157 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 
-- # sync 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:47.266 rmmod nvme_tcp 00:26:47.266 rmmod nvme_fabrics 00:26:47.266 rmmod nvme_keyring 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1045885 ']' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1045885 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1045885 ']' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1045885 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045885 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1045885' 00:26:47.266 killing process with pid 1045885 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1045885 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1045885 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.266 12:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:49.804 00:26:49.804 real 0m18.735s 00:26:49.804 user 0m22.856s 00:26:49.804 sys 0m6.009s 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.804 12:40:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.804 ************************************ 00:26:49.804 END TEST nvmf_host_discovery 00:26:49.804 ************************************ 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.804 ************************************ 00:26:49.804 START TEST nvmf_host_multipath_status 00:26:49.804 ************************************ 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:49.804 * Looking for test storage... 
00:26:49.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:49.804 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:49.805 12:40:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.805 12:40:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:49.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.805 --rc genhtml_branch_coverage=1 00:26:49.805 --rc genhtml_function_coverage=1 00:26:49.805 --rc genhtml_legend=1 00:26:49.805 --rc geninfo_all_blocks=1 00:26:49.805 --rc geninfo_unexecuted_blocks=1 00:26:49.805 00:26:49.805 ' 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:49.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.805 --rc genhtml_branch_coverage=1 00:26:49.805 --rc genhtml_function_coverage=1 00:26:49.805 --rc genhtml_legend=1 00:26:49.805 --rc geninfo_all_blocks=1 00:26:49.805 --rc geninfo_unexecuted_blocks=1 00:26:49.805 00:26:49.805 ' 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:49.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.805 --rc genhtml_branch_coverage=1 00:26:49.805 --rc genhtml_function_coverage=1 00:26:49.805 --rc genhtml_legend=1 00:26:49.805 --rc geninfo_all_blocks=1 00:26:49.805 --rc geninfo_unexecuted_blocks=1 00:26:49.805 00:26:49.805 ' 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:49.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.805 --rc genhtml_branch_coverage=1 00:26:49.805 --rc genhtml_function_coverage=1 00:26:49.805 --rc genhtml_legend=1 00:26:49.805 --rc geninfo_all_blocks=1 00:26:49.805 --rc geninfo_unexecuted_blocks=1 00:26:49.805 00:26:49.805 ' 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:49.805 
12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.805 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:49.806 12:40:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.806 12:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.379 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:26:56.380 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:26:56.380 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:26:56.380 Found net devices under 0000:1a:00.0: cvl_0_0 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.380 12:41:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:26:56.380 Found net devices under 0000:1a:00.1: cvl_0_1 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.380 12:41:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:26:56.380 00:26:56.380 --- 10.0.0.2 ping statistics --- 00:26:56.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.380 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.825 ms 00:26:56.380 00:26:56.380 --- 10.0.0.1 ping statistics --- 00:26:56.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.380 rtt min/avg/max/mdev = 0.825/0.825/0.825/0.000 ms 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1051675 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 1051675 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1051675 ']' 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.380 12:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.380 [2024-11-20 12:41:01.503119] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:56.381 [2024-11-20 12:41:01.503162] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.381 [2024-11-20 12:41:01.580381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:56.381 [2024-11-20 12:41:01.616448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.381 [2024-11-20 12:41:01.616492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
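The repeated `port_status` checks later in this log filter the output of the `bdev_nvme_get_io_paths` RPC with a jq expression of the form `.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD`. A minimal Python sketch of that selection logic follows; the JSON shape is a simplified assumption for illustration (a real SPDK response carries more fields per path), and `port_status` here is a stand-in for the shell helper of the same name in multipath_status.sh, not SPDK API.

```python
import json

# Hypothetical, simplified bdev_nvme_get_io_paths response shape
# (assumption for illustration; real output has more fields per path).
sample = json.dumps({
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"}, "current": True,
             "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"}, "current": False,
             "connected": True, "accessible": True},
        ]}
    ]
})

def port_status(paths_json: str, port: str, field: str):
    """Mimic the jq filter
    '.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD'
    used by the test's port_status helper: walk every io_path in every
    poll group and return the requested field for the matching listener port."""
    data = json.loads(paths_json)
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None  # port not present in any poll group

print(port_status(sample, "4420", "current"))   # → True
print(port_status(sample, "4421", "current"))   # → False
```

This is the mechanism behind each `check_status` cycle in the log: after every `nvmf_subsystem_listener_set_ana_state` change, the test polls the 4420 and 4421 paths and asserts the expected `current`, `connected`, and `accessible` values.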
00:26:56.381 [2024-11-20 12:41:01.616499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.381 [2024-11-20 12:41:01.616504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.381 [2024-11-20 12:41:01.616509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.381 [2024-11-20 12:41:01.617796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.381 [2024-11-20 12:41:01.617797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1051675 00:26:56.640 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:56.899 [2024-11-20 12:41:02.506006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.899 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:57.158 Malloc0 00:26:57.158 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:57.417 12:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:57.417 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.676 [2024-11-20 12:41:03.278368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.676 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:57.936 [2024-11-20 12:41:03.462835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1051970 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1051970 /var/tmp/bdevperf.sock 00:26:57.936 12:41:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1051970 ']' 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:57.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.936 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.195 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.195 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:58.195 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:58.195 12:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:58.763 Nvme0n1 00:26:58.763 12:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:59.022 Nvme0n1 00:26:59.285 12:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:59.285 12:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:01.248 12:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:01.248 12:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:01.248 12:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:01.508 12:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:02.446 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:02.446 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:02.446 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.446 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:02.705 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.705 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:02.706 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:02.706 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.965 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.965 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:02.965 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.965 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.224 12:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:03.483 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.483 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:03.483 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.483 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:03.742 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.742 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:03.742 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:03.742 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:04.001 12:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:04.941 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:04.941 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:04.941 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.941 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:05.200 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.200 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:05.200 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.200 12:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.459 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.459 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.459 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.459 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.719 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.978 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.978 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:05.978 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.978 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.237 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.237 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:06.237 12:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:06.496 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:06.496 12:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:07.874 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:07.874 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.875 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.134 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.134 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.134 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.134 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:08.393 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.394 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:08.394 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.394 12:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:08.394 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.394 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:08.394 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.394 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:08.653 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.653 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:08.653 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:08.912 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:09.172 12:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:10.111 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:10.111 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:10.111 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.111 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:10.370 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.370 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:10.370 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.370 12:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:10.370 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:10.370 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:10.370 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:27:10.370 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.630 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.630 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:10.630 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.630 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.889 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.889 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:10.889 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.889 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:11.148 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.148 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:11.148 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.149 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:11.149 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.149 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:11.149 12:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:11.407 12:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:11.665 12:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:12.603 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:12.603 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:12.603 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.603 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.862 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:13.121 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.121 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:13.121 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.121 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:13.380 12:41:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.380 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:13.380 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.380 12:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.638 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:13.638 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:13.638 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.638 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:13.638 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:13.638 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:13.638 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:13.897 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:14.156 12:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:15.092 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:15.092 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:15.092 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.092 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.352 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.352 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:15.352 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.352 12:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.610 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.869 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.869 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:15.869 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.869 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.128 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.128 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:16.128 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.128 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.128 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.128 12:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:16.387 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:16.387 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:16.646 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:16.905 12:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:17.843 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:17.843 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:17.843 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:17.843 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.101 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.360 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.360 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.360 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:18.360 12:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.619 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:18.877 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.877 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:18.877 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.135 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:19.394 12:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:20.330 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:20.330 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:20.330 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.330 12:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.589 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.589 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:20.589 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.589 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.589 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.589 12:41:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.589 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.589 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.846 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.846 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.846 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.846 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.105 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.105 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:21.105 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.105 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.363 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.363 
12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:21.363 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.363 12:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.363 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.363 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:21.363 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:21.621 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:21.891 12:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:22.835 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:22.835 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:22.835 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.835 12:41:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.094 12:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.352 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.352 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.352 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.352 12:41:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.611 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.611 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:23.611 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.611 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.869 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.869 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:23.869 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.869 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.869 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.869 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:23.869 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:24.128 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:24.388 12:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:25.337 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:25.337 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:25.337 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.337 12:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.599 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.599 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:25.599 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.599 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.599 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.599 12:41:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.599 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.599 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.858 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.858 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.858 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.858 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.116 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.116 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:26.116 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.116 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.375 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.375 
12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:26.375 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.375 12:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1051970 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1051970 ']' 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1051970 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1051970 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1051970' 00:27:26.375 killing process with pid 1051970 00:27:26.375 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1051970 00:27:26.375 
12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1051970 00:27:26.375 { 00:27:26.375 "results": [ 00:27:26.375 { 00:27:26.375 "job": "Nvme0n1", 00:27:26.375 "core_mask": "0x4", 00:27:26.375 "workload": "verify", 00:27:26.375 "status": "terminated", 00:27:26.375 "verify_range": { 00:27:26.375 "start": 0, 00:27:26.375 "length": 16384 00:27:26.375 }, 00:27:26.375 "queue_depth": 128, 00:27:26.375 "io_size": 4096, 00:27:26.375 "runtime": 27.188443, 00:27:26.375 "iops": 11483.224692197342, 00:27:26.375 "mibps": 44.85634645389587, 00:27:26.375 "io_failed": 0, 00:27:26.375 "io_timeout": 0, 00:27:26.375 "avg_latency_us": 11126.86109814429, 00:27:26.375 "min_latency_us": 1087.3018181818181, 00:27:26.375 "max_latency_us": 3080906.938181818 00:27:26.375 } 00:27:26.376 ], 00:27:26.376 "core_count": 1 00:27:26.376 } 00:27:26.657 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1051970 00:27:26.657 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.657 [2024-11-20 12:41:03.535433] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:27:26.657 [2024-11-20 12:41:03.535494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051970 ] 00:27:26.657 [2024-11-20 12:41:03.607372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.657 [2024-11-20 12:41:03.645269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.657 Running I/O for 90 seconds... 
00:27:26.657 12621.00 IOPS, 49.30 MiB/s [2024-11-20T11:41:32.421Z] 12650.50 IOPS, 49.42 MiB/s [2024-11-20T11:41:32.421Z] 12672.33 IOPS, 49.50 MiB/s [2024-11-20T11:41:32.421Z] 12664.75 IOPS, 49.47 MiB/s [2024-11-20T11:41:32.421Z] 12675.60 IOPS, 49.51 MiB/s [2024-11-20T11:41:32.421Z] 12631.67 IOPS, 49.34 MiB/s [2024-11-20T11:41:32.421Z] 12609.57 IOPS, 49.26 MiB/s [2024-11-20T11:41:32.421Z] 12593.00 IOPS, 49.19 MiB/s [2024-11-20T11:41:32.421Z] 12601.78 IOPS, 49.23 MiB/s [2024-11-20T11:41:32.421Z] 12581.80 IOPS, 49.15 MiB/s [2024-11-20T11:41:32.421Z] 12591.45 IOPS, 49.19 MiB/s [2024-11-20T11:41:32.421Z] 12583.08 IOPS, 49.15 MiB/s [2024-11-20T11:41:32.421Z] [2024-11-20 12:41:17.023964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.657 [2024-11-20 12:41:17.023998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.657 [2024-11-20 12:41:17.024016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.657 [2024-11-20 12:41:17.024023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.657 [2024-11-20 12:41:17.024035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.657 [2024-11-20 12:41:17.024042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.657 [2024-11-20 12:41:17.024054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.657 [2024-11-20 12:41:17.024060] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.657 [2024-11-20 12:41:17.024071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.657 [2024-11-20 12:41:17.024078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[... repeated WRITE command/completion pairs elided: every queued WRITE on qid:1 (lba 41144 through 41736, len:8, SGL DATA BLOCK, sqhd 0010 through 005a) completes with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status ...]
[... repeated READ command/completion pairs elided: every queued READ on qid:1 (lba 40728 through 41032, len:8, SGL TRANSPORT DATA BLOCK, sqhd 005b wrapping through 0001) likewise completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:26.660 [2024-11-20 12:41:17.026484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.026503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.026520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.026537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.026554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.026571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.026588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.026606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.660 [2024-11-20 12:41:17.026611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:26.660 [2024-11-20 12:41:17.027421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.660 [2024-11-20 12:41:17.027428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.027847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.027854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.661 [2024-11-20 12:41:17.039374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:26.661 [2024-11-20 12:41:17.039385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.039984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.039995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.040001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.040012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.040017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.040028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.040035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.040046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.040052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.040062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.662 [2024-11-20 12:41:17.040068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:26.662 [2024-11-20 12:41:17.040080] nvme_qpair.c: 
00:27:26.662-00:27:26.665 [2024-11-20 12:41:17.040086 - 12:41:17.043551] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on sqid:1 (nsid:1, len:8), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0:
00:27:26.662 - WRITE commands, lba:41608 through lba:41736 in steps of 8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, completions sqhd:004a through sqhd:005a
00:27:26.662 - READ commands, lba:40728 through lba:41088 in steps of 8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, completions sqhd:005b through sqhd:0008
00:27:26.664 - READ lba:41096 (sqhd:0009), then WRITE commands lba:41744 (sqhd:000a) and lba:41104 through lba:41504 in steps of 8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, completions sqhd:000b through sqhd:003c
00:27:26.665 (per-entry cid values varied and are elided; the message pattern is identical for each command/completion pair)
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.043987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.043996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.044010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.044019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.044034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.044042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.044061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.044069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.044084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.044092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.044107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.665 [2024-11-20 12:41:17.051220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:26.665 [2024-11-20 12:41:17.051246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.666 [2024-11-20 12:41:17.051257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.666 [2024-11-20 12:41:17.051288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.666 [2024-11-20 12:41:17.051320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.666 [2024-11-20 12:41:17.051351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.666 [2024-11-20 12:41:17.051382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.051980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.051991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.666 [2024-11-20 12:41:17.052306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:26.666 [2024-11-20 12:41:17.052327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.052806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.052818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.053805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.053827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.053851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.667 [2024-11-20 12:41:17.053862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.053882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.053893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.053914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.053925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.053946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.053957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.053977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.053988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:26.667 [2024-11-20 12:41:17.054526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.667 [2024-11-20 12:41:17.054537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.054978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.054999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.055427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.055440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:26.668 [2024-11-20 12:41:17.056479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.668 [2024-11-20 12:41:17.056492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.056990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.057001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.057032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.057063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.057094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.057127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.669 [2024-11-20 12:41:17.057159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:26.669 [2024-11-20 12:41:17.057740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.669 [2024-11-20 12:41:17.057750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.057781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.057812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.057844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.057874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.057905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.057939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.057970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.057990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.058561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.058572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.059540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.059575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.670 [2024-11-20 12:41:17.059606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.670 [2024-11-20 12:41:17.059641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.670 [2024-11-20 12:41:17.059672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.670 [2024-11-20 12:41:17.059703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.670 [2024-11-20 12:41:17.059735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.670 [2024-11-20 12:41:17.059754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.059786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.059818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.059848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.059880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.059911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.059942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.059974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.059985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.671 [2024-11-20 12:41:17.060906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:26.671 [2024-11-20 12:41:17.060927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.060937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.060958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.060969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.060989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:26.672 [2024-11-20 12:41:17.061776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.672 [2024-11-20 12:41:17.061783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: WRITE (lba 41536-41744) and READ (lba 40728-41096) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 ...]
00:27:26.675 [2024-11-20 12:41:17.064862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.064869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.064883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.064890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.064903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.064912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.064927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.064934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.064948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.064955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.064969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.064976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.675 [2024-11-20 12:41:17.065729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:26.675 [2024-11-20 12:41:17.065742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.065981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.065994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.676 [2024-11-20 12:41:17.066172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:26.676 [2024-11-20 12:41:17.066530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.676 [2024-11-20 12:41:17.066537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.066984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.066992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.677 [2024-11-20 12:41:17.067812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.677 [2024-11-20 12:41:17.067833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:26.677 [2024-11-20 12:41:17.067847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.067857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.067870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.067878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.067892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.067899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.067912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.067920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.067933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.067941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.067955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.067962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.067976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.067983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.067997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:26.678 [2024-11-20 12:41:17.068662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.678 [2024-11-20 12:41:17.068669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.068683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.068691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.068704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.068711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.068725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.068732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.068746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.068754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.068767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.068774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.068788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.068795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:26.679 [2024-11-20 12:41:17.069852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.679 [2024-11-20 12:41:17.069859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:26.680 [2024-11-20 12:41:17.069873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.680 [2024-11-20 12:41:17.069881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:26.680 [2024-11-20 12:41:17.069894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.680 [2024-11-20 12:41:17.069902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:26.680 [2024-11-20 12:41:17.069917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.680 [2024-11-20 12:41:17.069924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:26.680 [2024-11-20 12:41:17.069938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.680 [2024-11-20 12:41:17.069947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:26.680 [2024-11-20 12:41:17.069960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.680 [2024-11-20 12:41:17.069968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:26.680 [2024-11-20 12:41:17.069981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.680 [2024-11-20 12:41:17.069989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:26.680 [2024-11-20 12:41:17.070003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.680 [2024-11-20 12:41:17.070010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:26.680
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: every READ (lba:40728 through lba:41096) and WRITE (lba:41104 through lba:41616, plus lba:41728-41744, len:8 each) on sqid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), qid:1, timestamps 12:41:17.070003-12:41:17.073295 ...]
[2024-11-20 12:41:17.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.683 [2024-11-20 12:41:17.073559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.683 [2024-11-20 12:41:17.073874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:26.683 [2024-11-20 12:41:17.073885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.073891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.073902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.073908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.073918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.073925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.073936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.073942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.073953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.073959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.073970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.073976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.073987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.073994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.684 [2024-11-20 12:41:17.074851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.684 [2024-11-20 12:41:17.074868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.684 [2024-11-20 12:41:17.074885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.684 [2024-11-20 12:41:17.074895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.074903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.685 [2024-11-20 12:41:17.074914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.074920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.685 [2024-11-20 12:41:17.074931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.074937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.685 [2024-11-20 12:41:17.074948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.074954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:26.685 [2024-11-20 12:41:17.074965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.074971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:26.685 [2024-11-20 12:41:17.074982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.074988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:26.685 [2024-11-20 12:41:17.074999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.075005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:26.685 [2024-11-20 12:41:17.075016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.685 [2024-11-20 12:41:17.075022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:26.685 [2024-11-20 12:41:17.075033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:26.685 [2024-11-20 12:41:17.075039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs repeat for lba 41184 through 41736 (sqhd 0015-005a), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); elided ...]
00:27:26.687 [2024-11-20 12:41:17.077063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.687 [2024-11-20 12:41:17.077069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[... identical READ command/completion pairs repeat for lba 40736 through 41072 (sqhd 005c-0006), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); elided ...]
00:27:26.688 [2024-11-20 12:41:17.077840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.688 [2024-11-20 12:41:17.077847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.688 [2024-11-20 12:41:17.077864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.688 [2024-11-20 12:41:17.077881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.077899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.077916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.077934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.077952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.077970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.077987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.077999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:26.688 [2024-11-20 12:41:17.078578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.688 [2024-11-20 12:41:17.078584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.078986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.078997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:26.689 [2024-11-20 12:41:17.079152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.689 [2024-11-20 12:41:17.079159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.079496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.079990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.690 [2024-11-20 12:41:17.080164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:26.690 [2024-11-20 12:41:17.080338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.690 [2024-11-20 12:41:17.080345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.080827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.080833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.691 [2024-11-20 12:41:17.084237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.691 [2024-11-20 12:41:17.084253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:26.691 [2024-11-20 12:41:17.084264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.084988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.084994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.085007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.085013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.085027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.085033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.085047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.085053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.085067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.692 [2024-11-20 12:41:17.085073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:26.692 [2024-11-20 12:41:17.085087] nvme_qpair.c: 
243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [2024-11-20 12:41:17.085] repeated WRITE commands sqid:1 nsid:1 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) covering lba 41368 through 41656, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:002c-0050 p:0 m:0 dnr:0; identical per-command NOTICE pairs elided. 00:27:26.693
11734.38 IOPS, 45.84 MiB/s; 10896.21 IOPS, 42.56 MiB/s; 10169.80 IOPS, 39.73 MiB/s; 10175.75 IOPS, 39.75 MiB/s; 10320.94 IOPS, 40.32 MiB/s; 10508.50 IOPS, 41.05 MiB/s; 10718.37 IOPS, 41.87 MiB/s; 10878.05 IOPS, 42.49 MiB/s; 10956.62 IOPS, 42.80 MiB/s; 11032.36 IOPS, 43.10 MiB/s; 11131.65 IOPS, 43.48 MiB/s; 11266.58 IOPS, 44.01 MiB/s [2024-11-20T11:41:32.457Z]
243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [2024-11-20 12:41:29.918-29.921] repeated WRITE commands sqid:1 nsid:1 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) covering lba 47928 through 48480, and READ commands sqid:1 nsid:1 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) covering lba 47552 through 47920, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0060-0012 p:0 m:0 dnr:0; identical per-command NOTICE pairs elided. 00:27:26.695
11389.84 IOPS, 44.49 MiB/s; 11432.96 IOPS, 44.66 MiB/s; 11475.44 IOPS, 44.83 MiB/s [2024-11-20T11:41:32.459Z]
Received shutdown signal, test time was about 27.189044 seconds 00:27:26.695
Latency(us) 00:27:26.695
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
Nvme0n1 : 27.19 11483.22 44.86 0.00 0.00 11126.86 1087.30 3080906.94
===================================================================================================================
Total : 11483.22 44.86 0.00 0.00 11126.86 1087.30 3080906.94
00:27:26.695 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- 
# nvmftestfini 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:26.954 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:26.954 rmmod nvme_tcp 00:27:26.954 rmmod nvme_fabrics 00:27:26.954 rmmod nvme_keyring 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1051675 ']' 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1051675 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1051675 ']' 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1051675 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1051675 00:27:26.955 12:41:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1051675' 00:27:26.955 killing process with pid 1051675 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1051675 00:27:26.955 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1051675 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.214 12:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.214 12:41:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.121 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:29.121 00:27:29.121 real 0m39.766s 00:27:29.121 user 1m45.722s 00:27:29.121 sys 0m10.817s 00:27:29.121 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.121 12:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:29.121 ************************************ 00:27:29.121 END TEST nvmf_host_multipath_status 00:27:29.121 ************************************ 00:27:29.381 12:41:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:29.381 12:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:29.381 12:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.381 12:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.381 ************************************ 00:27:29.381 START TEST nvmf_discovery_remove_ifc 00:27:29.381 ************************************ 00:27:29.381 12:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:29.381 * Looking for test storage... 
00:27:29.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:27:29.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.381 --rc genhtml_branch_coverage=1 00:27:29.381 --rc genhtml_function_coverage=1 00:27:29.381 --rc genhtml_legend=1 00:27:29.381 --rc geninfo_all_blocks=1 00:27:29.381 --rc geninfo_unexecuted_blocks=1 00:27:29.381 00:27:29.381 ' 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:29.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.381 --rc genhtml_branch_coverage=1 00:27:29.381 --rc genhtml_function_coverage=1 00:27:29.381 --rc genhtml_legend=1 00:27:29.381 --rc geninfo_all_blocks=1 00:27:29.381 --rc geninfo_unexecuted_blocks=1 00:27:29.381 00:27:29.381 ' 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:29.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.381 --rc genhtml_branch_coverage=1 00:27:29.381 --rc genhtml_function_coverage=1 00:27:29.381 --rc genhtml_legend=1 00:27:29.381 --rc geninfo_all_blocks=1 00:27:29.381 --rc geninfo_unexecuted_blocks=1 00:27:29.381 00:27:29.381 ' 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:29.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.381 --rc genhtml_branch_coverage=1 00:27:29.381 --rc genhtml_function_coverage=1 00:27:29.381 --rc genhtml_legend=1 00:27:29.381 --rc geninfo_all_blocks=1 00:27:29.381 --rc geninfo_unexecuted_blocks=1 00:27:29.381 00:27:29.381 ' 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.381 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:29.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:29.641 
12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:29.641 12:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:36.213 12:41:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:36.213 12:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.213 12:41:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:27:36.213 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.213 12:41:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:27:36.213 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:27:36.213 Found net devices under 0000:1a:00.0: cvl_0_0 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:27:36.213 Found net devices under 0000:1a:00.1: cvl_0_1 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.213 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:36.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:27:36.214 00:27:36.214 --- 10.0.0.2 ping statistics --- 00:27:36.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.214 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:27:36.214 00:27:36.214 --- 10.0.0.1 ping statistics --- 00:27:36.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.214 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1060926 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1060926 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1060926 ']' 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.214 12:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.214 [2024-11-20 12:41:41.372479] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:27:36.214 [2024-11-20 12:41:41.372520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.214 [2024-11-20 12:41:41.447943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.214 [2024-11-20 12:41:41.485656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.214 [2024-11-20 12:41:41.485693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:36.214 [2024-11-20 12:41:41.485699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.214 [2024-11-20 12:41:41.485705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.214 [2024-11-20 12:41:41.485709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.214 [2024-11-20 12:41:41.486315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.473 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.473 [2024-11-20 12:41:42.233410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.732 [2024-11-20 12:41:42.241577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:36.732 null0 00:27:36.732 [2024-11-20 12:41:42.273559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1061203 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1061203 /tmp/host.sock 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1061203 ']' 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:36.732 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.732 [2024-11-20 12:41:42.341097] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:27:36.732 [2024-11-20 12:41:42.341136] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061203 ] 00:27:36.732 [2024-11-20 12:41:42.413677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.732 [2024-11-20 12:41:42.452918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.732 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.991 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.991 12:41:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:36.991 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.991 12:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.923 [2024-11-20 12:41:43.620512] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:37.923 [2024-11-20 12:41:43.620530] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:37.923 [2024-11-20 12:41:43.620541] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:38.181 [2024-11-20 12:41:43.707814] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:38.181 [2024-11-20 12:41:43.769447] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:38.181 [2024-11-20 12:41:43.770226] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x22de230:1 started. 
00:27:38.181 [2024-11-20 12:41:43.771480] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:38.181 [2024-11-20 12:41:43.771519] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:38.182 [2024-11-20 12:41:43.771536] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:38.182 [2024-11-20 12:41:43.771547] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:38.182 [2024-11-20 12:41:43.771564] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.182 [2024-11-20 12:41:43.778614] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x22de230 was disconnected and freed. delete nvme_qpair. 
00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.182 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.439 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:38.439 12:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:39.370 12:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:39.370 12:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.370 12:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:39.370 12:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.370 12:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:39.370 12:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.370 12:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:39.370 12:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.370 12:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:39.370 12:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.304 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.563 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:40.563 12:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:41.498 12:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:42.433 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.433 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.433 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.433 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.434 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.434 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.434 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:42.434 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.434 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:42.434 12:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 
00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.810 [2024-11-20 12:41:49.213149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:43.810 [2024-11-20 12:41:49.213185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.810 [2024-11-20 12:41:49.213195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.810 [2024-11-20 12:41:49.213203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.810 [2024-11-20 12:41:49.213209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.810 [2024-11-20 12:41:49.213216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.810 [2024-11-20 12:41:49.213223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.810 [2024-11-20 12:41:49.213229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.810 [2024-11-20 12:41:49.213235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.810 [2024-11-20 12:41:49.213242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:43.810 [2024-11-20 12:41:49.213248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.810 [2024-11-20 12:41:49.213254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22baa70 is same with the state(6) to be set 00:27:43.810 [2024-11-20 12:41:49.223172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22baa70 (9): Bad file descriptor 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:43.810 12:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.810 [2024-11-20 12:41:49.233205] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:43.810 [2024-11-20 12:41:49.233216] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:43.810 [2024-11-20 12:41:49.233220] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:43.810 [2024-11-20 12:41:49.233224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:43.810 [2024-11-20 12:41:49.233242] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.745 [2024-11-20 12:41:50.289480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:44.745 [2024-11-20 12:41:50.289578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22baa70 with addr=10.0.0.2, port=4420 00:27:44.745 [2024-11-20 12:41:50.289613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22baa70 is same with the state(6) to be set 00:27:44.745 [2024-11-20 12:41:50.289678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22baa70 (9): Bad file descriptor 00:27:44.745 [2024-11-20 12:41:50.290659] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:27:44.745 [2024-11-20 12:41:50.290726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:44.745 [2024-11-20 12:41:50.290750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:44.745 [2024-11-20 12:41:50.290786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:44.745 [2024-11-20 12:41:50.290817] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:44.745 [2024-11-20 12:41:50.290835] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:44.745 [2024-11-20 12:41:50.290848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:44.745 [2024-11-20 12:41:50.290871] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:44.745 [2024-11-20 12:41:50.290886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:44.745 12:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:45.681 [2024-11-20 12:41:51.293405] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:45.681 [2024-11-20 12:41:51.293429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:45.681 [2024-11-20 12:41:51.293440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:45.681 [2024-11-20 12:41:51.293446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:45.681 [2024-11-20 12:41:51.293453] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:45.681 [2024-11-20 12:41:51.293459] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:45.681 [2024-11-20 12:41:51.293464] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:45.681 [2024-11-20 12:41:51.293483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:45.681 [2024-11-20 12:41:51.293502] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:45.681 [2024-11-20 12:41:51.293521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.681 [2024-11-20 12:41:51.293530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.681 [2024-11-20 12:41:51.293539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.681 [2024-11-20 12:41:51.293546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.681 [2024-11-20 12:41:51.293553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:45.681 [2024-11-20 12:41:51.293558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.681 [2024-11-20 12:41:51.293565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.681 [2024-11-20 12:41:51.293574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.682 [2024-11-20 12:41:51.293581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.682 [2024-11-20 12:41:51.293588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.682 [2024-11-20 12:41:51.293594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:45.682 [2024-11-20 12:41:51.293927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a9d40 (9): Bad file descriptor 00:27:45.682 [2024-11-20 12:41:51.294937] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:45.682 [2024-11-20 12:41:51.294947] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.682 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:45.952 12:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:46.972 12:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.908 [2024-11-20 12:41:53.344906] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:47.908 [2024-11-20 12:41:53.344922] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:47.908 [2024-11-20 12:41:53.344933] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:47.908 [2024-11-20 12:41:53.472309] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:47.908 12:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.908 [2024-11-20 12:41:53.657268] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:47.908 [2024-11-20 12:41:53.657838] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x22aec60:1 started. 
00:27:47.908 [2024-11-20 12:41:53.658823] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:47.908 [2024-11-20 12:41:53.658855] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:47.908 [2024-11-20 12:41:53.658870] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:47.908 [2024-11-20 12:41:53.658882] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:47.908 [2024-11-20 12:41:53.658888] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:47.908 [2024-11-20 12:41:53.663931] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x22aec60 was disconnected and freed. delete nvme_qpair. 00:27:48.844 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.844 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:49.103 12:41:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1061203 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1061203 ']' 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1061203 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1061203 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1061203' 00:27:49.103 killing process with pid 1061203 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1061203 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1061203 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.103 
12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.103 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.103 rmmod nvme_tcp 00:27:49.363 rmmod nvme_fabrics 00:27:49.363 rmmod nvme_keyring 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1060926 ']' 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1060926 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1060926 ']' 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1060926 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1060926 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1060926' 00:27:49.363 
killing process with pid 1060926 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1060926 00:27:49.363 12:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1060926 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.622 12:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.527 00:27:51.527 real 0m22.253s 00:27:51.527 user 0m27.376s 00:27:51.527 sys 0m6.063s 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.527 ************************************ 00:27:51.527 END TEST nvmf_discovery_remove_ifc 00:27:51.527 ************************************ 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.527 ************************************ 00:27:51.527 START TEST nvmf_identify_kernel_target 00:27:51.527 ************************************ 00:27:51.527 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:51.787 * Looking for test storage... 
00:27:51.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:51.787 12:41:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.787 12:41:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:51.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.787 --rc genhtml_branch_coverage=1 00:27:51.787 --rc genhtml_function_coverage=1 00:27:51.787 --rc genhtml_legend=1 00:27:51.787 --rc geninfo_all_blocks=1 00:27:51.787 --rc geninfo_unexecuted_blocks=1 00:27:51.787 00:27:51.787 ' 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:51.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.787 --rc genhtml_branch_coverage=1 00:27:51.787 --rc genhtml_function_coverage=1 00:27:51.787 --rc genhtml_legend=1 00:27:51.787 --rc geninfo_all_blocks=1 00:27:51.787 --rc geninfo_unexecuted_blocks=1 00:27:51.787 00:27:51.787 ' 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:51.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.787 --rc genhtml_branch_coverage=1 00:27:51.787 --rc genhtml_function_coverage=1 00:27:51.787 --rc genhtml_legend=1 00:27:51.787 --rc geninfo_all_blocks=1 00:27:51.787 --rc geninfo_unexecuted_blocks=1 00:27:51.787 00:27:51.787 ' 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:51.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.787 --rc genhtml_branch_coverage=1 00:27:51.787 --rc genhtml_function_coverage=1 00:27:51.787 --rc genhtml_legend=1 00:27:51.787 --rc geninfo_all_blocks=1 00:27:51.787 --rc geninfo_unexecuted_blocks=1 00:27:51.787 00:27:51.787 ' 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.787 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:51.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.788 12:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.359 12:42:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:27:58.359 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.359 12:42:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:27:58.359 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.359 12:42:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:27:58.359 Found net devices under 0000:1a:00.0: cvl_0_0 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.359 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:27:58.360 Found net devices under 0000:1a:00.1: cvl_0_1 
00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:58.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:27:58.360 00:27:58.360 --- 10.0.0.2 ping statistics --- 00:27:58.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.360 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:27:58.360 00:27:58.360 --- 10.0.0.1 ping statistics --- 00:27:58.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.360 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:58.360 
12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:58.360 12:42:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:01.649 Waiting for block devices as requested 00:28:01.649 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:28:01.649 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:01.649 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:01.649 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:01.649 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:01.649 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:01.649 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:01.909 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:01.909 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:01.909 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:02.168 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:28:02.168 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:02.168 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:02.426 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:02.426 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 
00:28:02.426 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:02.691 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:02.691 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:02.691 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:02.951 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:02.951 No valid GPT data, bailing 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:02.951 
12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:02.951 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:28:03.210 No valid GPT data, bailing 00:28:03.210 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:03.210 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:03.210 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme2n1 ]] 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@680 -- # is_block_zoned nvme2n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme2n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme2n1 pt 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:28:03.211 No valid GPT data, bailing 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme2n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme3n1 ]] 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme3n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme3n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme3n1 pt 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme3n1 00:28:03.211 No valid GPT data, bailing 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme3n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme3n1 ]] 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@696 -- # echo /dev/nvme3n1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:03.211 00:28:03.211 Discovery Log Number of Records 2, Generation counter 2 00:28:03.211 =====Discovery Log Entry 0====== 00:28:03.211 trtype: tcp 00:28:03.211 adrfam: ipv4 00:28:03.211 subtype: current discovery subsystem 00:28:03.211 treq: not specified, sq flow control disable supported 00:28:03.211 portid: 1 00:28:03.211 trsvcid: 4420 00:28:03.211 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:03.211 traddr: 10.0.0.1 00:28:03.211 eflags: none 00:28:03.211 sectype: none 00:28:03.211 =====Discovery Log Entry 1====== 00:28:03.211 trtype: tcp 00:28:03.211 adrfam: ipv4 00:28:03.211 subtype: nvme subsystem 00:28:03.211 treq: not specified, sq flow control disable supported 00:28:03.211 portid: 1 00:28:03.211 trsvcid: 4420 00:28:03.211 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:03.211 traddr: 10.0.0.1 00:28:03.211 eflags: none 00:28:03.211 sectype: none 00:28:03.211 12:42:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:03.211 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:03.471 ===================================================== 00:28:03.471 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:03.471 ===================================================== 00:28:03.471 Controller Capabilities/Features 00:28:03.471 ================================ 00:28:03.471 Vendor ID: 0000 00:28:03.471 Subsystem Vendor ID: 0000 00:28:03.471 Serial Number: c15a1496804ae2576100 00:28:03.471 Model Number: Linux 00:28:03.471 Firmware Version: 6.8.9-20 00:28:03.471 Recommended Arb Burst: 0 00:28:03.471 IEEE OUI Identifier: 00 00 00 00:28:03.471 Multi-path I/O 00:28:03.471 May have multiple subsystem ports: No 00:28:03.471 May have multiple controllers: No 00:28:03.471 Associated with SR-IOV VF: No 00:28:03.471 Max Data Transfer Size: Unlimited 00:28:03.471 Max Number of Namespaces: 0 00:28:03.471 Max Number of I/O Queues: 1024 00:28:03.471 NVMe Specification Version (VS): 1.3 00:28:03.471 NVMe Specification Version (Identify): 1.3 00:28:03.471 Maximum Queue Entries: 1024 00:28:03.471 Contiguous Queues Required: No 00:28:03.471 Arbitration Mechanisms Supported 00:28:03.471 Weighted Round Robin: Not Supported 00:28:03.471 Vendor Specific: Not Supported 00:28:03.471 Reset Timeout: 7500 ms 00:28:03.471 Doorbell Stride: 4 bytes 00:28:03.471 NVM Subsystem Reset: Not Supported 00:28:03.471 Command Sets Supported 00:28:03.471 NVM Command Set: Supported 00:28:03.471 Boot Partition: Not Supported 00:28:03.471 Memory Page Size Minimum: 4096 bytes 00:28:03.471 Memory Page Size Maximum: 4096 bytes 00:28:03.471 Persistent Memory Region: Not Supported 00:28:03.471 Optional Asynchronous Events Supported 00:28:03.471 Namespace Attribute Notices: Not Supported 00:28:03.471 Firmware Activation Notices: 
Not Supported 00:28:03.471 ANA Change Notices: Not Supported 00:28:03.471 PLE Aggregate Log Change Notices: Not Supported 00:28:03.471 LBA Status Info Alert Notices: Not Supported 00:28:03.471 EGE Aggregate Log Change Notices: Not Supported 00:28:03.471 Normal NVM Subsystem Shutdown event: Not Supported 00:28:03.471 Zone Descriptor Change Notices: Not Supported 00:28:03.471 Discovery Log Change Notices: Supported 00:28:03.471 Controller Attributes 00:28:03.471 128-bit Host Identifier: Not Supported 00:28:03.471 Non-Operational Permissive Mode: Not Supported 00:28:03.471 NVM Sets: Not Supported 00:28:03.471 Read Recovery Levels: Not Supported 00:28:03.471 Endurance Groups: Not Supported 00:28:03.471 Predictable Latency Mode: Not Supported 00:28:03.471 Traffic Based Keep ALive: Not Supported 00:28:03.471 Namespace Granularity: Not Supported 00:28:03.471 SQ Associations: Not Supported 00:28:03.471 UUID List: Not Supported 00:28:03.471 Multi-Domain Subsystem: Not Supported 00:28:03.471 Fixed Capacity Management: Not Supported 00:28:03.471 Variable Capacity Management: Not Supported 00:28:03.472 Delete Endurance Group: Not Supported 00:28:03.472 Delete NVM Set: Not Supported 00:28:03.472 Extended LBA Formats Supported: Not Supported 00:28:03.472 Flexible Data Placement Supported: Not Supported 00:28:03.472 00:28:03.472 Controller Memory Buffer Support 00:28:03.472 ================================ 00:28:03.472 Supported: No 00:28:03.472 00:28:03.472 Persistent Memory Region Support 00:28:03.472 ================================ 00:28:03.472 Supported: No 00:28:03.472 00:28:03.472 Admin Command Set Attributes 00:28:03.472 ============================ 00:28:03.472 Security Send/Receive: Not Supported 00:28:03.472 Format NVM: Not Supported 00:28:03.472 Firmware Activate/Download: Not Supported 00:28:03.472 Namespace Management: Not Supported 00:28:03.472 Device Self-Test: Not Supported 00:28:03.472 Directives: Not Supported 00:28:03.472 NVMe-MI: Not Supported 00:28:03.472 
Virtualization Management: Not Supported 00:28:03.472 Doorbell Buffer Config: Not Supported 00:28:03.472 Get LBA Status Capability: Not Supported 00:28:03.472 Command & Feature Lockdown Capability: Not Supported 00:28:03.472 Abort Command Limit: 1 00:28:03.472 Async Event Request Limit: 1 00:28:03.472 Number of Firmware Slots: N/A 00:28:03.472 Firmware Slot 1 Read-Only: N/A 00:28:03.472 Firmware Activation Without Reset: N/A 00:28:03.472 Multiple Update Detection Support: N/A 00:28:03.472 Firmware Update Granularity: No Information Provided 00:28:03.472 Per-Namespace SMART Log: No 00:28:03.472 Asymmetric Namespace Access Log Page: Not Supported 00:28:03.472 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:03.472 Command Effects Log Page: Not Supported 00:28:03.472 Get Log Page Extended Data: Supported 00:28:03.472 Telemetry Log Pages: Not Supported 00:28:03.472 Persistent Event Log Pages: Not Supported 00:28:03.472 Supported Log Pages Log Page: May Support 00:28:03.472 Commands Supported & Effects Log Page: Not Supported 00:28:03.472 Feature Identifiers & Effects Log Page:May Support 00:28:03.472 NVMe-MI Commands & Effects Log Page: May Support 00:28:03.472 Data Area 4 for Telemetry Log: Not Supported 00:28:03.472 Error Log Page Entries Supported: 1 00:28:03.472 Keep Alive: Not Supported 00:28:03.472 00:28:03.472 NVM Command Set Attributes 00:28:03.472 ========================== 00:28:03.472 Submission Queue Entry Size 00:28:03.472 Max: 1 00:28:03.472 Min: 1 00:28:03.472 Completion Queue Entry Size 00:28:03.472 Max: 1 00:28:03.472 Min: 1 00:28:03.472 Number of Namespaces: 0 00:28:03.472 Compare Command: Not Supported 00:28:03.472 Write Uncorrectable Command: Not Supported 00:28:03.472 Dataset Management Command: Not Supported 00:28:03.472 Write Zeroes Command: Not Supported 00:28:03.472 Set Features Save Field: Not Supported 00:28:03.472 Reservations: Not Supported 00:28:03.472 Timestamp: Not Supported 00:28:03.472 Copy: Not Supported 00:28:03.472 Volatile 
Write Cache: Not Present 00:28:03.472 Atomic Write Unit (Normal): 1 00:28:03.472 Atomic Write Unit (PFail): 1 00:28:03.472 Atomic Compare & Write Unit: 1 00:28:03.472 Fused Compare & Write: Not Supported 00:28:03.472 Scatter-Gather List 00:28:03.472 SGL Command Set: Supported 00:28:03.472 SGL Keyed: Not Supported 00:28:03.472 SGL Bit Bucket Descriptor: Not Supported 00:28:03.472 SGL Metadata Pointer: Not Supported 00:28:03.472 Oversized SGL: Not Supported 00:28:03.472 SGL Metadata Address: Not Supported 00:28:03.472 SGL Offset: Supported 00:28:03.472 Transport SGL Data Block: Not Supported 00:28:03.472 Replay Protected Memory Block: Not Supported 00:28:03.472 00:28:03.472 Firmware Slot Information 00:28:03.472 ========================= 00:28:03.472 Active slot: 0 00:28:03.472 00:28:03.472 00:28:03.472 Error Log 00:28:03.472 ========= 00:28:03.472 00:28:03.472 Active Namespaces 00:28:03.472 ================= 00:28:03.472 Discovery Log Page 00:28:03.472 ================== 00:28:03.472 Generation Counter: 2 00:28:03.472 Number of Records: 2 00:28:03.472 Record Format: 0 00:28:03.472 00:28:03.472 Discovery Log Entry 0 00:28:03.472 ---------------------- 00:28:03.472 Transport Type: 3 (TCP) 00:28:03.472 Address Family: 1 (IPv4) 00:28:03.472 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:03.472 Entry Flags: 00:28:03.472 Duplicate Returned Information: 0 00:28:03.472 Explicit Persistent Connection Support for Discovery: 0 00:28:03.472 Transport Requirements: 00:28:03.472 Secure Channel: Not Specified 00:28:03.472 Port ID: 1 (0x0001) 00:28:03.472 Controller ID: 65535 (0xffff) 00:28:03.472 Admin Max SQ Size: 32 00:28:03.472 Transport Service Identifier: 4420 00:28:03.472 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:03.472 Transport Address: 10.0.0.1 00:28:03.472 Discovery Log Entry 1 00:28:03.472 ---------------------- 00:28:03.472 Transport Type: 3 (TCP) 00:28:03.472 Address Family: 1 (IPv4) 00:28:03.472 Subsystem Type: 2 (NVM Subsystem) 
00:28:03.472 Entry Flags: 00:28:03.472 Duplicate Returned Information: 0 00:28:03.472 Explicit Persistent Connection Support for Discovery: 0 00:28:03.472 Transport Requirements: 00:28:03.472 Secure Channel: Not Specified 00:28:03.472 Port ID: 1 (0x0001) 00:28:03.472 Controller ID: 65535 (0xffff) 00:28:03.472 Admin Max SQ Size: 32 00:28:03.472 Transport Service Identifier: 4420 00:28:03.472 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:03.472 Transport Address: 10.0.0.1 00:28:03.472 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:03.472 get_feature(0x01) failed 00:28:03.472 get_feature(0x02) failed 00:28:03.472 get_feature(0x04) failed 00:28:03.472 ===================================================== 00:28:03.472 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:03.472 ===================================================== 00:28:03.472 Controller Capabilities/Features 00:28:03.472 ================================ 00:28:03.472 Vendor ID: 0000 00:28:03.472 Subsystem Vendor ID: 0000 00:28:03.472 Serial Number: 1501807bce6f2ab80e32 00:28:03.472 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:03.472 Firmware Version: 6.8.9-20 00:28:03.472 Recommended Arb Burst: 6 00:28:03.472 IEEE OUI Identifier: 00 00 00 00:28:03.472 Multi-path I/O 00:28:03.472 May have multiple subsystem ports: Yes 00:28:03.473 May have multiple controllers: Yes 00:28:03.473 Associated with SR-IOV VF: No 00:28:03.473 Max Data Transfer Size: Unlimited 00:28:03.473 Max Number of Namespaces: 1024 00:28:03.473 Max Number of I/O Queues: 128 00:28:03.473 NVMe Specification Version (VS): 1.3 00:28:03.473 NVMe Specification Version (Identify): 1.3 00:28:03.473 Maximum Queue Entries: 1024 00:28:03.473 Contiguous Queues Required: No 
00:28:03.473 Arbitration Mechanisms Supported 00:28:03.473 Weighted Round Robin: Not Supported 00:28:03.473 Vendor Specific: Not Supported 00:28:03.473 Reset Timeout: 7500 ms 00:28:03.473 Doorbell Stride: 4 bytes 00:28:03.473 NVM Subsystem Reset: Not Supported 00:28:03.473 Command Sets Supported 00:28:03.473 NVM Command Set: Supported 00:28:03.473 Boot Partition: Not Supported 00:28:03.473 Memory Page Size Minimum: 4096 bytes 00:28:03.473 Memory Page Size Maximum: 4096 bytes 00:28:03.473 Persistent Memory Region: Not Supported 00:28:03.473 Optional Asynchronous Events Supported 00:28:03.473 Namespace Attribute Notices: Supported 00:28:03.473 Firmware Activation Notices: Not Supported 00:28:03.473 ANA Change Notices: Supported 00:28:03.473 PLE Aggregate Log Change Notices: Not Supported 00:28:03.473 LBA Status Info Alert Notices: Not Supported 00:28:03.473 EGE Aggregate Log Change Notices: Not Supported 00:28:03.473 Normal NVM Subsystem Shutdown event: Not Supported 00:28:03.473 Zone Descriptor Change Notices: Not Supported 00:28:03.473 Discovery Log Change Notices: Not Supported 00:28:03.473 Controller Attributes 00:28:03.473 128-bit Host Identifier: Supported 00:28:03.473 Non-Operational Permissive Mode: Not Supported 00:28:03.473 NVM Sets: Not Supported 00:28:03.473 Read Recovery Levels: Not Supported 00:28:03.473 Endurance Groups: Not Supported 00:28:03.473 Predictable Latency Mode: Not Supported 00:28:03.473 Traffic Based Keep ALive: Supported 00:28:03.473 Namespace Granularity: Not Supported 00:28:03.473 SQ Associations: Not Supported 00:28:03.473 UUID List: Not Supported 00:28:03.473 Multi-Domain Subsystem: Not Supported 00:28:03.473 Fixed Capacity Management: Not Supported 00:28:03.473 Variable Capacity Management: Not Supported 00:28:03.473 Delete Endurance Group: Not Supported 00:28:03.473 Delete NVM Set: Not Supported 00:28:03.473 Extended LBA Formats Supported: Not Supported 00:28:03.473 Flexible Data Placement Supported: Not Supported 00:28:03.473 
00:28:03.473 Controller Memory Buffer Support 00:28:03.473 ================================ 00:28:03.473 Supported: No 00:28:03.473 00:28:03.473 Persistent Memory Region Support 00:28:03.473 ================================ 00:28:03.473 Supported: No 00:28:03.473 00:28:03.473 Admin Command Set Attributes 00:28:03.473 ============================ 00:28:03.473 Security Send/Receive: Not Supported 00:28:03.473 Format NVM: Not Supported 00:28:03.473 Firmware Activate/Download: Not Supported 00:28:03.473 Namespace Management: Not Supported 00:28:03.473 Device Self-Test: Not Supported 00:28:03.473 Directives: Not Supported 00:28:03.473 NVMe-MI: Not Supported 00:28:03.473 Virtualization Management: Not Supported 00:28:03.473 Doorbell Buffer Config: Not Supported 00:28:03.473 Get LBA Status Capability: Not Supported 00:28:03.473 Command & Feature Lockdown Capability: Not Supported 00:28:03.473 Abort Command Limit: 4 00:28:03.473 Async Event Request Limit: 4 00:28:03.473 Number of Firmware Slots: N/A 00:28:03.473 Firmware Slot 1 Read-Only: N/A 00:28:03.473 Firmware Activation Without Reset: N/A 00:28:03.473 Multiple Update Detection Support: N/A 00:28:03.473 Firmware Update Granularity: No Information Provided 00:28:03.473 Per-Namespace SMART Log: Yes 00:28:03.473 Asymmetric Namespace Access Log Page: Supported 00:28:03.473 ANA Transition Time : 10 sec 00:28:03.473 00:28:03.473 Asymmetric Namespace Access Capabilities 00:28:03.473 ANA Optimized State : Supported 00:28:03.473 ANA Non-Optimized State : Supported 00:28:03.473 ANA Inaccessible State : Supported 00:28:03.473 ANA Persistent Loss State : Supported 00:28:03.473 ANA Change State : Supported 00:28:03.473 ANAGRPID is not changed : No 00:28:03.473 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:03.473 00:28:03.473 ANA Group Identifier Maximum : 128 00:28:03.473 Number of ANA Group Identifiers : 128 00:28:03.473 Max Number of Allowed Namespaces : 1024 00:28:03.473 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 
00:28:03.473 Command Effects Log Page: Supported 00:28:03.473 Get Log Page Extended Data: Supported 00:28:03.473 Telemetry Log Pages: Not Supported 00:28:03.473 Persistent Event Log Pages: Not Supported 00:28:03.473 Supported Log Pages Log Page: May Support 00:28:03.473 Commands Supported & Effects Log Page: Not Supported 00:28:03.473 Feature Identifiers & Effects Log Page:May Support 00:28:03.473 NVMe-MI Commands & Effects Log Page: May Support 00:28:03.473 Data Area 4 for Telemetry Log: Not Supported 00:28:03.473 Error Log Page Entries Supported: 128 00:28:03.473 Keep Alive: Supported 00:28:03.473 Keep Alive Granularity: 1000 ms 00:28:03.473 00:28:03.473 NVM Command Set Attributes 00:28:03.473 ========================== 00:28:03.473 Submission Queue Entry Size 00:28:03.473 Max: 64 00:28:03.473 Min: 64 00:28:03.473 Completion Queue Entry Size 00:28:03.473 Max: 16 00:28:03.473 Min: 16 00:28:03.473 Number of Namespaces: 1024 00:28:03.473 Compare Command: Not Supported 00:28:03.473 Write Uncorrectable Command: Not Supported 00:28:03.473 Dataset Management Command: Supported 00:28:03.473 Write Zeroes Command: Supported 00:28:03.473 Set Features Save Field: Not Supported 00:28:03.473 Reservations: Not Supported 00:28:03.473 Timestamp: Not Supported 00:28:03.473 Copy: Not Supported 00:28:03.473 Volatile Write Cache: Present 00:28:03.473 Atomic Write Unit (Normal): 1 00:28:03.473 Atomic Write Unit (PFail): 1 00:28:03.473 Atomic Compare & Write Unit: 1 00:28:03.473 Fused Compare & Write: Not Supported 00:28:03.473 Scatter-Gather List 00:28:03.473 SGL Command Set: Supported 00:28:03.473 SGL Keyed: Not Supported 00:28:03.473 SGL Bit Bucket Descriptor: Not Supported 00:28:03.473 SGL Metadata Pointer: Not Supported 00:28:03.473 Oversized SGL: Not Supported 00:28:03.473 SGL Metadata Address: Not Supported 00:28:03.473 SGL Offset: Supported 00:28:03.473 Transport SGL Data Block: Not Supported 00:28:03.473 Replay Protected Memory Block: Not Supported 00:28:03.473 00:28:03.473 
Firmware Slot Information 00:28:03.473 ========================= 00:28:03.473 Active slot: 0 00:28:03.473 00:28:03.473 Asymmetric Namespace Access 00:28:03.473 =========================== 00:28:03.473 Change Count : 0 00:28:03.473 Number of ANA Group Descriptors : 1 00:28:03.473 ANA Group Descriptor : 0 00:28:03.473 ANA Group ID : 1 00:28:03.473 Number of NSID Values : 1 00:28:03.473 Change Count : 0 00:28:03.473 ANA State : 1 00:28:03.473 Namespace Identifier : 1 00:28:03.473 00:28:03.473 Commands Supported and Effects 00:28:03.473 ============================== 00:28:03.473 Admin Commands 00:28:03.473 -------------- 00:28:03.473 Get Log Page (02h): Supported 00:28:03.473 Identify (06h): Supported 00:28:03.473 Abort (08h): Supported 00:28:03.473 Set Features (09h): Supported 00:28:03.473 Get Features (0Ah): Supported 00:28:03.473 Asynchronous Event Request (0Ch): Supported 00:28:03.473 Keep Alive (18h): Supported 00:28:03.473 I/O Commands 00:28:03.473 ------------ 00:28:03.473 Flush (00h): Supported 00:28:03.473 Write (01h): Supported LBA-Change 00:28:03.473 Read (02h): Supported 00:28:03.473 Write Zeroes (08h): Supported LBA-Change 00:28:03.473 Dataset Management (09h): Supported 00:28:03.473 00:28:03.473 Error Log 00:28:03.473 ========= 00:28:03.473 Entry: 0 00:28:03.473 Error Count: 0x3 00:28:03.473 Submission Queue Id: 0x0 00:28:03.473 Command Id: 0x5 00:28:03.473 Phase Bit: 0 00:28:03.473 Status Code: 0x2 00:28:03.473 Status Code Type: 0x0 00:28:03.473 Do Not Retry: 1 00:28:03.473 Error Location: 0x28 00:28:03.473 LBA: 0x0 00:28:03.473 Namespace: 0x0 00:28:03.473 Vendor Log Page: 0x0 00:28:03.473 ----------- 00:28:03.473 Entry: 1 00:28:03.473 Error Count: 0x2 00:28:03.473 Submission Queue Id: 0x0 00:28:03.473 Command Id: 0x5 00:28:03.473 Phase Bit: 0 00:28:03.474 Status Code: 0x2 00:28:03.474 Status Code Type: 0x0 00:28:03.474 Do Not Retry: 1 00:28:03.474 Error Location: 0x28 00:28:03.474 LBA: 0x0 00:28:03.474 Namespace: 0x0 00:28:03.474 Vendor Log Page: 0x0 
00:28:03.474 ----------- 00:28:03.474 Entry: 2 00:28:03.474 Error Count: 0x1 00:28:03.474 Submission Queue Id: 0x0 00:28:03.474 Command Id: 0x4 00:28:03.474 Phase Bit: 0 00:28:03.474 Status Code: 0x2 00:28:03.474 Status Code Type: 0x0 00:28:03.474 Do Not Retry: 1 00:28:03.474 Error Location: 0x28 00:28:03.474 LBA: 0x0 00:28:03.474 Namespace: 0x0 00:28:03.474 Vendor Log Page: 0x0 00:28:03.474 00:28:03.474 Number of Queues 00:28:03.474 ================ 00:28:03.474 Number of I/O Submission Queues: 128 00:28:03.474 Number of I/O Completion Queues: 128 00:28:03.474 00:28:03.474 ZNS Specific Controller Data 00:28:03.474 ============================ 00:28:03.474 Zone Append Size Limit: 0 00:28:03.474 00:28:03.474 00:28:03.474 Active Namespaces 00:28:03.474 ================= 00:28:03.474 get_feature(0x05) failed 00:28:03.474 Namespace ID:1 00:28:03.474 Command Set Identifier: NVM (00h) 00:28:03.474 Deallocate: Supported 00:28:03.474 Deallocated/Unwritten Error: Not Supported 00:28:03.474 Deallocated Read Value: Unknown 00:28:03.474 Deallocate in Write Zeroes: Not Supported 00:28:03.474 Deallocated Guard Field: 0xFFFF 00:28:03.474 Flush: Supported 00:28:03.474 Reservation: Not Supported 00:28:03.474 Namespace Sharing Capabilities: Multiple Controllers 00:28:03.474 Size (in LBAs): 1953525168 (931GiB) 00:28:03.474 Capacity (in LBAs): 1953525168 (931GiB) 00:28:03.474 Utilization (in LBAs): 1953525168 (931GiB) 00:28:03.474 UUID: d29e92d8-b1b9-4fd0-b0b2-2a7e83770b0b 00:28:03.474 Thin Provisioning: Not Supported 00:28:03.474 Per-NS Atomic Units: Yes 00:28:03.474 Atomic Boundary Size (Normal): 0 00:28:03.474 Atomic Boundary Size (PFail): 0 00:28:03.474 Atomic Boundary Offset: 0 00:28:03.474 NGUID/EUI64 Never Reused: No 00:28:03.474 ANA group ID: 1 00:28:03.474 Namespace Write Protected: No 00:28:03.474 Number of LBA Formats: 1 00:28:03.474 Current LBA Format: LBA Format #00 00:28:03.474 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:03.474 00:28:03.474 12:42:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.474 rmmod nvme_tcp 00:28:03.474 rmmod nvme_fabrics 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@791 -- # iptables-restore 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.474 12:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.008 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:06.008 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:06.008 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:06.008 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:06.008 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:06.008 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:06.009 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:06.009 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:06.009 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # 
modules=(/sys/module/nvmet/holders/*) 00:28:06.009 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:06.009 12:42:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:09.296 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:09.296 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:10.233 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:28:11.170 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:28:11.170 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:11.170 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:28:11.170 00:28:11.170 real 0m19.584s 00:28:11.170 user 0m4.804s 00:28:11.170 sys 0m9.936s 00:28:11.170 12:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:11.170 12:42:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.170 ************************************ 00:28:11.170 END TEST nvmf_identify_kernel_target 00:28:11.170 ************************************ 00:28:11.170 12:42:16 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:11.170 12:42:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:11.170 12:42:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.170 12:42:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.430 ************************************ 00:28:11.430 START TEST nvmf_auth_host 00:28:11.430 ************************************ 00:28:11.430 12:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:11.430 * Looking for test storage... 00:28:11.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.430 12:42:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.430 
12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:11.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.430 --rc genhtml_branch_coverage=1 00:28:11.430 --rc genhtml_function_coverage=1 00:28:11.430 --rc genhtml_legend=1 00:28:11.430 --rc geninfo_all_blocks=1 00:28:11.430 --rc geninfo_unexecuted_blocks=1 00:28:11.430 00:28:11.430 ' 00:28:11.430 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:11.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.430 --rc genhtml_branch_coverage=1 00:28:11.430 --rc genhtml_function_coverage=1 00:28:11.431 --rc genhtml_legend=1 00:28:11.431 --rc geninfo_all_blocks=1 00:28:11.431 --rc geninfo_unexecuted_blocks=1 00:28:11.431 00:28:11.431 ' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:11.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.431 --rc genhtml_branch_coverage=1 00:28:11.431 --rc genhtml_function_coverage=1 00:28:11.431 --rc genhtml_legend=1 00:28:11.431 --rc geninfo_all_blocks=1 00:28:11.431 --rc geninfo_unexecuted_blocks=1 00:28:11.431 00:28:11.431 ' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:11.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.431 --rc genhtml_branch_coverage=1 00:28:11.431 --rc genhtml_function_coverage=1 00:28:11.431 --rc genhtml_legend=1 00:28:11.431 --rc geninfo_all_blocks=1 00:28:11.431 --rc 
geninfo_unexecuted_blocks=1 00:28:11.431 00:28:11.431 ' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:11.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" 
"ffdhe6144" "ffdhe8192") 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 
-- # xtrace_disable 00:28:11.431 12:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.998 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 
00:28:17.999 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:28:17.999 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.999 12:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:28:17.999 Found net devices under 0000:1a:00.0: cvl_0_0 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:28:17.999 Found net devices under 0000:1a:00.1: cvl_0_1 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # 
(( 2 == 0 )) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
00:28:17.999 12:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.999 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.999 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.999 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.999 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.999 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:18.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:28:18.000 00:28:18.000 --- 10.0.0.2 ping statistics --- 00:28:18.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.000 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:18.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:28:18.000 00:28:18.000 --- 10.0.0.1 ping statistics --- 00:28:18.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.000 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1074932 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1074932 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1074932 ']' 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.000 12:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.567 12:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=18823b58c0f1f86c4aa7b104485ba60f 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yCK 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 18823b58c0f1f86c4aa7b104485ba60f 0 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 18823b58c0f1f86c4aa7b104485ba60f 0 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=18823b58c0f1f86c4aa7b104485ba60f 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yCK 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yCK 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yCK 
00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=57f2577093e0aa2b373bc451a5adcb63d66d5c9118b7793c45cb915bc69d97fd 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oZI 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 57f2577093e0aa2b373bc451a5adcb63d66d5c9118b7793c45cb915bc69d97fd 3 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 57f2577093e0aa2b373bc451a5adcb63d66d5c9118b7793c45cb915bc69d97fd 3 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.567 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=57f2577093e0aa2b373bc451a5adcb63d66d5c9118b7793c45cb915bc69d97fd 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oZI 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oZI 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oZI 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=07a00e7938ec3ae0300a27019bfbc5830683f7d6d85ff6c0 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3Xu 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 07a00e7938ec3ae0300a27019bfbc5830683f7d6d85ff6c0 0 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 07a00e7938ec3ae0300a27019bfbc5830683f7d6d85ff6c0 0 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=07a00e7938ec3ae0300a27019bfbc5830683f7d6d85ff6c0 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:18.568 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.826 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3Xu 00:28:18.826 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3Xu 00:28:18.826 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3Xu 00:28:18.826 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:18.826 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9172fd2162945600b5e70f360cad1a5c2ab8b581a0965ad7 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.alS 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9172fd2162945600b5e70f360cad1a5c2ab8b581a0965ad7 2 00:28:18.827 12:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9172fd2162945600b5e70f360cad1a5c2ab8b581a0965ad7 2 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9172fd2162945600b5e70f360cad1a5c2ab8b581a0965ad7 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.alS 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.alS 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.alS 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d347cccc5e3396ad4c49ec7d58abda0 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GAc 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d347cccc5e3396ad4c49ec7d58abda0 1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d347cccc5e3396ad4c49ec7d58abda0 1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6d347cccc5e3396ad4c49ec7d58abda0 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GAc 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GAc 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GAc 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3a035b8b2b24f376a8ca874ed6bdadf4 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2mD 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3a035b8b2b24f376a8ca874ed6bdadf4 1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3a035b8b2b24f376a8ca874ed6bdadf4 1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3a035b8b2b24f376a8ca874ed6bdadf4 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2mD 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2mD 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2mD 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.827 12:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=14b823cc7efaa1ea408a17d04b4c9fe37ece64ff8434ed6f 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qJs 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 14b823cc7efaa1ea408a17d04b4c9fe37ece64ff8434ed6f 2 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 14b823cc7efaa1ea408a17d04b4c9fe37ece64ff8434ed6f 2 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=14b823cc7efaa1ea408a17d04b4c9fe37ece64ff8434ed6f 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qJs 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qJs 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qJs 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=afd62e67c43f71bdc686f059a1743451 00:28:18.827 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FkD 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key afd62e67c43f71bdc686f059a1743451 0 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 afd62e67c43f71bdc686f059a1743451 0 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=afd62e67c43f71bdc686f059a1743451 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FkD 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FkD 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.FkD 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0090ae25108b733d39f0b0b2116a16c7d35665cd5ed7ffcfe5532671f0cdf6dc 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mz1 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0090ae25108b733d39f0b0b2116a16c7d35665cd5ed7ffcfe5532671f0cdf6dc 3 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0090ae25108b733d39f0b0b2116a16c7d35665cd5ed7ffcfe5532671f0cdf6dc 3 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:19.086 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0090ae25108b733d39f0b0b2116a16c7d35665cd5ed7ffcfe5532671f0cdf6dc 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:19.087 12:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mz1 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mz1 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mz1 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1074932 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1074932 ']' 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.087 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yCK 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oZI ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oZI 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3Xu 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.alS ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.alS 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GAc 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2mD ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2mD 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.qJs 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.FkD ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.FkD 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mz1 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.346 12:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:19.346 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:19.347 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:19.347 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:19.347 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:19.347 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:19.347 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:19.347 12:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:22.636 Waiting for block devices as requested 00:28:22.636 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:28:22.636 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:22.636 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:22.895 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:22.895 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:22.895 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:22.895 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:23.154 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:23.154 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:23.154 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:23.154 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:28:23.413 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:23.413 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:23.413 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:23.671 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:23.671 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:23.671 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:23.929 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:23.929 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:23.929 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:24.866 12:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:24.866 No valid GPT data, bailing 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:24.866 12:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:28:24.866 No valid GPT data, bailing 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme2n1 ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme2n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme2n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme2n1 pt 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:28:24.866 No valid GPT data, bailing 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # pt= 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme2n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme3n1 ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme3n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme3n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme3n1 pt 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme3n1 00:28:24.866 No valid GPT data, bailing 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme3n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme3n1 ]] 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme3n1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:24.866 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:24.867 00:28:24.867 Discovery Log Number of Records 2, Generation counter 2 00:28:24.867 =====Discovery Log Entry 0====== 00:28:24.867 trtype: tcp 00:28:24.867 adrfam: ipv4 00:28:24.867 subtype: current discovery subsystem 00:28:24.867 treq: not specified, sq flow control disable supported 00:28:24.867 portid: 1 00:28:24.867 trsvcid: 4420 00:28:24.867 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:24.867 traddr: 
10.0.0.1 00:28:24.867 eflags: none 00:28:24.867 sectype: none 00:28:24.867 =====Discovery Log Entry 1====== 00:28:24.867 trtype: tcp 00:28:24.867 adrfam: ipv4 00:28:24.867 subtype: nvme subsystem 00:28:24.867 treq: not specified, sq flow control disable supported 00:28:24.867 portid: 1 00:28:24.867 trsvcid: 4420 00:28:24.867 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:24.867 traddr: 10.0.0.1 00:28:24.867 eflags: none 00:28:24.867 sectype: none 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.867 12:42:30 
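[editor's note] The `configure_kernel_target` steps traced above (`nvmf/common.sh@686`–`@705`) reduce to roughly the configfs sequence below, ending in the `nvme discover` whose two-record output appears in the log. NQN, address, port, and device path are taken from the log; the configfs attribute names are the standard nvmet ones, inferred because xtrace truncates the redirect targets. This is a root-only configuration fragment, not a runnable test:

```shell
#!/usr/bin/env bash
# Sketch of the kernel NVMe-oF target setup traced above (requires root + nvmet module).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed attr for the SPDK-... echo
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme3n1 > "$subsys/namespaces/1/device_path"        # block device picked by the loop above
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem on the port, then verify with discovery:
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420
```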
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.867 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.128 nvme0n1 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.128 12:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.128 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.388 nvme0n1 00:28:25.388 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.388 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.388 12:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.388 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.388 12:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 
00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.388 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 nvme0n1 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:25.648 12:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
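The `get_main_ns_ip` walk traced above (nvmf/common.sh@769-783) picks which address variable to dial based on the transport: an associative array maps the transport name to the *name* of the IP variable, which is then dereferenced with `${!…}` indirection. A minimal standalone re-creation of that logic, assuming bash and using `TEST_TRANSPORT` as the selector the way the trace implies:

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip selection seen in the trace: map
# transport -> name of the variable holding the IP, then resolve
# that name with ${!...} indirect expansion.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                    # no transport selected
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport
    ip=${!ip_candidates[$TEST_TRANSPORT]}                   # indirect expansion
    [[ -z $ip ]] && return 1                                # IP variable unset
    echo "$ip"
}

# With tcp selected, the initiator address is what gets dialed:
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip   # prints 10.0.0.1
```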
00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.648 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.907 nvme0n1 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.907 12:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.907 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha256 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.908 12:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.908 nvme0n1 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.908 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
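Each `nvme0n1` block above is one pass of the same cycle from host/auth.sh: program the key (and optional controller key) into the target, restrict the host to a single digest/dhgroup via `bdev_nvme_set_options`, attach, check `bdev_nvme_get_controllers` reports `nvme0`, then detach before the next keyid. A condensed sketch of that control flow follows; `rpc_cmd` and `nvmet_auth_set_key` are stubbed here because the real helpers need a live SPDK target, and the key values are placeholders:

```shell
#!/usr/bin/env bash
# Condensed sketch of the loop driving this trace. rpc_cmd and
# nvmet_auth_set_key are stubs: the real helpers talk to rpc.py
# and the kernel nvmet configfs on a live target.
rpc_cmd() {
    # Only get_controllers produces output the loop inspects.
    if [[ $1 == bdev_nvme_get_controllers ]]; then echo nvme0; fi
}
nvmet_auth_set_key() { :; }   # would program digest/dhgroup/key on the target

digest=sha256
dhgroups=(ffdhe2048 ffdhe3072)   # the trace walks further groups too
keys=(k0 k1 k2 k3 k4)            # placeholders for the DHHC-1 secrets

cycles=0
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
        rpc_cmd bdev_nvme_set_options                      # restrict host policy
        rpc_cmd bdev_nvme_attach_controller                # authenticate + connect
        name=$(rpc_cmd bdev_nvme_get_controllers)
        [[ $name == nvme0 ]] || exit 1                     # controller must appear
        rpc_cmd bdev_nvme_detach_controller                # clean up for next key
        cycles=$((cycles + 1))
    done
done
echo "$cycles attach/detach cycles verified"   # 2 dhgroups x 5 keys = 10
```

The detach at the end of every iteration is what keeps each authentication attempt independent: a leftover `nvme0` would make the next `bdev_nvme_get_controllers` check pass spuriously.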
00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.167 12:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.167 nvme0n1 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.167 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:26.426 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.427 12:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.427 12:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.427 nvme0n1 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.427 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.686 12:42:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 nvme0n1 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.686 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:26.949 
12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.949 nvme0n1
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==:
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z:
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==:
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]]
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z:
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.949 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.208 nvme0n1
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=:
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:27.208 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=:
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.209 12:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.468 nvme0n1
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi:
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=:
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi:
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=:
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.468 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.469 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:27.469 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.469 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.727 nvme0n1
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.727 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==:
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==:
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==:
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]]
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==:
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.986 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.245 nvme0n1
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.245 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2:
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu:
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2:
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]]
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu:
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.246 12:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.505 nvme0n1
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==:
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z:
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==:
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]]
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z:
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.505 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.506 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.506 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.506 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.506 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.506 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:28.506 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.506 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.764 nvme0n1
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.764 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=:
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=:
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.765 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.024 nvme0n1
00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x 00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.024 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.025 12:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:28:29.025 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.284 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.284 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.284 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.285 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.285 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.285 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.285 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.285 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.285 12:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.544 nvme0n1 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:29.544 12:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.544 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.545 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.545 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.545 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.112 nvme0n1 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.113 
12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.113 12:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.373 nvme0n1 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.373 12:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.373 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.374 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.374 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.374 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:30.941 nvme0n1 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.941 
12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.941 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.200 nvme0n1 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.200 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.201 12:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.201 12:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.768 nvme0n1 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.768 12:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.768 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.027 12:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.027 12:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.596 nvme0n1 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.596 12:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.596 12:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.596 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.164 nvme0n1 00:28:33.164 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.165 12:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.165 12:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:33.733 nvme0n1 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.733 
12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.733 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.302 nvme0n1 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.302 12:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.302 12:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.561 nvme0n1 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:34.561 12:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:34.561 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.562 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.821 nvme0n1 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.821 
12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.821 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.080 nvme0n1 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.080 12:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.080 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:35.081 nvme0n1 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.081 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.340 
12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.340 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.341 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 12:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 nvme0n1 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.341 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.600 12:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.600 nvme0n1 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.600 12:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.600 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.601 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.907 12:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.907 nvme0n1 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.907 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.908 12:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.908 12:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.908 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.177 nvme0n1 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.177 12:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.177 12:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:36.444 nvme0n1 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.444 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.445 
12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.445 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.703 nvme0n1 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:36.703 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.704 12:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.704 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.963 nvme0n1 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.963 12:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.963 12:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.963 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.222 nvme0n1 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.222 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.481 12:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.481 12:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.481 12:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.481 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.740 nvme0n1 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.740 12:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.740 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:37.999 nvme0n1 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.999 
12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.999 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.000 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.000 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.000 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.000 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.000 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.259 nvme0n1 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.259 12:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.259 12:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.828 nvme0n1 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.828 12:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.828 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.829 12:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.829 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.088 nvme0n1 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.088 12:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.088 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.089 12:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.089 12:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.657 nvme0n1 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.657 12:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.657 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:39.916 nvme0n1 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.916 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:39.917 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.917 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.917 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.176 
12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.176 12:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 nvme0n1 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.435 12:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.435 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.003 nvme0n1 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.003 12:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.003 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.004 12:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.004 12:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.571 nvme0n1 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.571 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.831 12:42:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.831 12:42:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.831 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.400 nvme0n1 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.400 12:42:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.400 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.401 12:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
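Each iteration above follows the same two-step RPC pattern: first `bdev_nvme_set_options` restricts the allowed DH-HMAC-CHAP digest and DH group, then `bdev_nvme_attach_controller` connects with `--dhchap-key keyN` (adding `--dhchap-ctrlr-key ckeyN` only when a controller key is configured for that keyid, as the `ckey=(${ckeys[keyid]:+...})` expansion shows). As a rough illustration only — the helper below is hypothetical and the JSON parameter names are assumptions inferred from the CLI flags in this log, not taken from the test suite — the two JSON-RPC payloads for one iteration can be sketched in Python:

```python
import json

def build_auth_rpcs(digest, dhgroup, keyid, have_ctrlr_key=True,
                    addr="10.0.0.1", svcid="4420"):
    """Hypothetical sketch of the two JSON-RPC requests that one
    connect_authenticate() iteration in the log corresponds to."""
    # Step 1: restrict negotiable DH-HMAC-CHAP parameters.
    set_opts = {
        "jsonrpc": "2.0", "id": 1,
        "method": "bdev_nvme_set_options",
        "params": {"dhchap_digests": [digest],
                   "dhchap_dhgroups": [dhgroup]},
    }
    # Step 2: attach with the host key for this keyid; the controller
    # key is optional (keyid=4 in this log has no ckey).
    attach_params = {
        "name": "nvme0", "trtype": "tcp", "adrfam": "ipv4",
        "traddr": addr, "trsvcid": svcid,
        "hostnqn": "nqn.2024-02.io.spdk:host0",
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "dhchap_key": f"key{keyid}",
    }
    if have_ctrlr_key:
        attach_params["dhchap_ctrlr_key"] = f"ckey{keyid}"
    attach = {"jsonrpc": "2.0", "id": 2,
              "method": "bdev_nvme_attach_controller",
              "params": attach_params}
    return set_opts, attach

# The sha384 / ffdhe8192 / keyid=3 iteration traced in the log:
set_opts, attach = build_auth_rpcs("sha384", "ffdhe8192", 3)
print(json.dumps(attach, indent=2))
```

This mirrors the loop structure in host/auth.sh (`for dhgroup ...; for keyid ...`): the target-side key is installed first via `nvmet_auth_set_key`, then the host attaches and the test asserts a controller named `nvme0` appears in `bdev_nvme_get_controllers` before detaching.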
00:28:42.969 nvme0n1 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.969 
12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.969 12:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.538 nvme0n1 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:43.538 12:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.538 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.797 nvme0n1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:43.797 12:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.797 nvme0n1 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.797 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
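Aside: the secrets echoed throughout this trace use the NVMe-oF DH-HMAC-CHAP representation `DHHC-1:<transform>:<base64 secret>:`. A minimal bash sketch of splitting that format into its fields (the `parse_dhchap_key` helper is ours, not part of the SPDK scripts; the sample secret is taken from the trace above):

```shell
#!/usr/bin/env bash
# Split a DH-HMAC-CHAP secret string "DHHC-1:<transform>:<base64>:"
# into its fields. Hypothetical helper for illustration only.
parse_dhchap_key() {
    local key=$1 magic transform secret _rest
    IFS=':' read -r magic transform secret _rest <<< "$key"
    # Every secret in the trace starts with the DHHC-1 magic.
    [ "$magic" = "DHHC-1" ] || { echo "not a DHHC-1 secret" >&2; return 1; }
    echo "transform=$transform secret_b64=$secret"
}

parse_dhchap_key "DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi:"
# prints: transform=00 secret_b64=MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi
```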
00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.056 
12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.056 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.057 nvme0n1 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.057 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.315 12:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:44.315 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
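The `get_main_ns_ip` steps traced above (from `nvmf/common.sh`) pick a candidate variable name per transport, then dereference it. A plain-bash sketch of that selection logic, under the assumption that the wrapper name `pick_main_ns_ip` is ours while the `ip_candidates` mapping and the `-z` guards follow the trace:

```shell
#!/usr/bin/env bash
# Resolve the main namespace IP for a transport, mirroring the traced
# candidate table: rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP.
pick_main_ns_ip() {
    local transport=$1
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    local varname=${ip_candidates[$transport]}
    # Guards seen in the trace: bail if transport or candidate is empty.
    [[ -z $transport || -z $varname ]] && return 1
    local ip=${!varname}          # indirect expansion, as in common.sh
    [[ -z $ip ]] && return 1
    echo "$ip"
}

NVMF_INITIATOR_IP=10.0.0.1
pick_main_ns_ip tcp
# prints: 10.0.0.1
```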
00:28:44.316 nvme0n1 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.316 12:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.316 
12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.316 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.575 nvme0n1 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.575 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.576 12:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.576 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.835 nvme0n1 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.835 12:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.835 12:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.835 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.836 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.094 nvme0n1 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.094 12:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.094 12:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.094 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.095 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.095 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.095 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 nvme0n1 00:28:45.353 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.353 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.353 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.353 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.353 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 12:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.353 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.353 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.353 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.353 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.353 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.354 12:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.354 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:45.613 nvme0n1 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.613 
12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.613 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 nvme0n1 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.872 12:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.872 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.131 nvme0n1 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.131 12:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:46.131 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.132 12:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.132 12:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.390 nvme0n1 00:28:46.390 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.390 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.390 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.390 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.390 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.391 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.391 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.391 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.391 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.391 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.649 12:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.649 12:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.649 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.909 nvme0n1 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.909 12:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.909 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:47.168 nvme0n1 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.168 
12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.168 12:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.426 nvme0n1 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.426 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.427 12:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.427 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.994 nvme0n1 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.994 12:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.994 12:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.994 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.254 nvme0n1 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.254 12:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.254 12:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.254 12:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.254 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.822 nvme0n1 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.822 12:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.822 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.823 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.823 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.823 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.823 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.823 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.823 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:49.081 nvme0n1 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.081 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.340 
12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.340 12:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.600 nvme0n1 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg4MjNiNThjMGYxZjg2YzRhYTdiMTA0NDg1YmE2MGbzWwLi: 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTdmMjU3NzA5M2UwYWEyYjM3M2JjNDUxYTVhZGNiNjNkNjZkNWM5MTE4Yjc3OTNjNDVjYjkxNWJjNjlkOTdmZBJOf88=: 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.600 12:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.600 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.168 nvme0n1 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.168 12:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.168 12:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.168 12:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.736 nvme0n1 00:28:50.736 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.736 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.736 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.736 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.736 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.736 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.995 12:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.995 12:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.995 12:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.564 nvme0n1 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.564 12:42:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTRiODIzY2M3ZWZhYTFlYTQwOGExN2QwNGI0YzlmZTM3ZWNlNjRmZjg0MzRlZDZmrWpYZQ==: 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: ]] 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkNjJlNjdjNDNmNzFiZGM2ODZmMDU5YTE3NDM0NTFBPk0Z: 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.564 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.565 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.565 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.565 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.565 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.565 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.565 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.565 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:52.132 nvme0n1 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA5MGFlMjUxMDhiNzMzZDM5ZjBiMGIyMTE2YTE2YzdkMzU2NjVjZDVlZDdmZmNmZTU1MzI2NzFmMGNkZjZkY1cK+w0=: 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.132 
12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.132 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.133 12:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.700 nvme0n1 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:52.700 
12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:52.700 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.701 request: 00:28:52.701 { 00:28:52.701 "name": "nvme0", 00:28:52.701 "trtype": "tcp", 00:28:52.701 "traddr": "10.0.0.1", 00:28:52.701 "adrfam": "ipv4", 00:28:52.701 "trsvcid": "4420", 00:28:52.701 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:52.701 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:52.701 "prchk_reftag": false, 00:28:52.701 "prchk_guard": false, 00:28:52.701 "hdgst": false, 00:28:52.701 "ddgst": false, 00:28:52.701 "allow_unrecognized_csi": false, 00:28:52.701 "method": "bdev_nvme_attach_controller", 00:28:52.701 "req_id": 1 00:28:52.701 } 00:28:52.701 Got JSON-RPC error response 00:28:52.701 response: 00:28:52.701 { 00:28:52.701 "code": -5, 00:28:52.701 "message": "Input/output 
error" 00:28:52.701 } 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.701 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.960 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.960 request: 00:28:52.960 { 00:28:52.960 "name": "nvme0", 00:28:52.960 "trtype": "tcp", 00:28:52.960 "traddr": "10.0.0.1", 
00:28:52.960 "adrfam": "ipv4", 00:28:52.960 "trsvcid": "4420", 00:28:52.960 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:52.960 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:52.960 "prchk_reftag": false, 00:28:52.960 "prchk_guard": false, 00:28:52.960 "hdgst": false, 00:28:52.960 "ddgst": false, 00:28:52.960 "dhchap_key": "key2", 00:28:52.960 "allow_unrecognized_csi": false, 00:28:52.960 "method": "bdev_nvme_attach_controller", 00:28:52.960 "req_id": 1 00:28:52.960 } 00:28:52.960 Got JSON-RPC error response 00:28:52.961 response: 00:28:52.961 { 00:28:52.961 "code": -5, 00:28:52.961 "message": "Input/output error" 00:28:52.961 } 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.961 12:42:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:52.961 12:42:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.961 request: 00:28:52.961 { 00:28:52.961 "name": "nvme0", 00:28:52.961 "trtype": "tcp", 00:28:52.961 "traddr": "10.0.0.1", 00:28:52.961 "adrfam": "ipv4", 00:28:52.961 "trsvcid": "4420", 00:28:52.961 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:52.961 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:52.961 "prchk_reftag": false, 00:28:52.961 "prchk_guard": false, 00:28:52.961 "hdgst": false, 00:28:52.961 "ddgst": false, 00:28:52.961 "dhchap_key": "key1", 00:28:52.961 "dhchap_ctrlr_key": "ckey2", 00:28:52.961 "allow_unrecognized_csi": false, 00:28:52.961 "method": "bdev_nvme_attach_controller", 00:28:52.961 "req_id": 1 00:28:52.961 } 00:28:52.961 Got JSON-RPC error response 00:28:52.961 response: 00:28:52.961 { 00:28:52.961 "code": -5, 00:28:52.961 "message": "Input/output error" 00:28:52.961 } 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.961 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.220 nvme0n1 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.220 12:42:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.220 12:42:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.220 12:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.479 request: 00:28:53.479 { 00:28:53.479 "name": "nvme0", 00:28:53.479 "dhchap_key": "key1", 00:28:53.479 "dhchap_ctrlr_key": "ckey2", 00:28:53.479 "method": "bdev_nvme_set_keys", 00:28:53.479 "req_id": 1 00:28:53.479 } 00:28:53.479 Got JSON-RPC error response 00:28:53.479 response: 00:28:53.479 { 00:28:53.479 "code": -13, 00:28:53.479 "message": "Permission denied" 00:28:53.479 } 00:28:53.479 
12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:53.479 12:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:54.415 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.415 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:54.415 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.416 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.416 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.416 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:54.416 12:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdhMDBlNzkzOGVjM2FlMDMwMGEyNzAxOWJmYmM1ODMwNjgzZjdkNmQ4NWZmNmMwP3wgRw==: 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: ]] 00:28:55.794 12:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE3MmZkMjE2Mjk0NTYwMGI1ZTcwZjM2MGNhZDFhNWMyYWI4YjU4MWEwOTY1YWQ3rPcJOw==: 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.794 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.795 nvme0n1 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.795 12:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQzNDdjY2NjNWUzMzk2YWQ0YzQ5ZWM3ZDU4YWJkYTDlFOY2: 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: ]] 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2EwMzViOGIyYjI0ZjM3NmE4Y2E4NzRlZDZiZGFkZjQwTmdu: 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.795 
12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.795 request: 00:28:55.795 { 00:28:55.795 "name": "nvme0", 00:28:55.795 "dhchap_key": "key2", 00:28:55.795 "dhchap_ctrlr_key": "ckey1", 00:28:55.795 "method": "bdev_nvme_set_keys", 00:28:55.795 "req_id": 1 00:28:55.795 } 00:28:55.795 Got JSON-RPC error response 00:28:55.795 response: 00:28:55.795 { 00:28:55.795 "code": -13, 00:28:55.795 "message": "Permission denied" 00:28:55.795 } 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.795 12:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:55.795 12:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.733 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.733 rmmod nvme_tcp 00:28:56.993 rmmod nvme_fabrics 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1074932 ']' 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1074932 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1074932 ']' 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1074932 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1074932 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1074932' 00:28:56.993 killing process with pid 1074932 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1074932 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1074932 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.993 12:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:59.537 12:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:02.828 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:02.828 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:03.766 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:29:04.704 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:29:04.704 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:04.704 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:29:04.704 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f 
/tmp/spdk.key-null.yCK /tmp/spdk.key-null.3Xu /tmp/spdk.key-sha256.GAc /tmp/spdk.key-sha384.qJs /tmp/spdk.key-sha512.mz1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:04.704 12:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:07.995 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:07.995 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:07.995 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:d9:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:07.995 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:07.995 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:07.995 00:29:07.995 real 0m56.717s 00:29:07.995 user 0m49.931s 00:29:07.995 sys 0m14.140s 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.995 ************************************ 00:29:07.995 END TEST nvmf_auth_host 00:29:07.995 ************************************ 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.995 ************************************ 00:29:07.995 START TEST nvmf_digest 00:29:07.995 ************************************ 00:29:07.995 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:08.255 * Looking for test storage... 
00:29:08.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:08.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.255 --rc genhtml_branch_coverage=1 00:29:08.255 --rc genhtml_function_coverage=1 00:29:08.255 --rc genhtml_legend=1 00:29:08.255 --rc geninfo_all_blocks=1 00:29:08.255 --rc geninfo_unexecuted_blocks=1 00:29:08.255 00:29:08.255 ' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:08.255 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:29:08.255 --rc genhtml_branch_coverage=1 00:29:08.255 --rc genhtml_function_coverage=1 00:29:08.255 --rc genhtml_legend=1 00:29:08.255 --rc geninfo_all_blocks=1 00:29:08.255 --rc geninfo_unexecuted_blocks=1 00:29:08.255 00:29:08.255 ' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:08.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.255 --rc genhtml_branch_coverage=1 00:29:08.255 --rc genhtml_function_coverage=1 00:29:08.255 --rc genhtml_legend=1 00:29:08.255 --rc geninfo_all_blocks=1 00:29:08.255 --rc geninfo_unexecuted_blocks=1 00:29:08.255 00:29:08.255 ' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:08.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.255 --rc genhtml_branch_coverage=1 00:29:08.255 --rc genhtml_function_coverage=1 00:29:08.255 --rc genhtml_legend=1 00:29:08.255 --rc geninfo_all_blocks=1 00:29:08.255 --rc geninfo_unexecuted_blocks=1 00:29:08.255 00:29:08.255 ' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.255 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:08.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.256 12:43:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.827 
12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:29:14.827 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:29:14.827 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.827 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:29:14.828 Found net devices under 0000:1a:00.0: cvl_0_0 00:29:14.828 
12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:29:14.828 Found net devices under 0000:1a:00.1: cvl_0_1 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.828 12:43:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.828 12:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:29:14.828 00:29:14.828 --- 10.0.0.2 ping statistics --- 00:29:14.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.828 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:14.828 00:29:14.828 --- 10.0.0.1 ping statistics --- 00:29:14.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.828 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:14.828 ************************************ 00:29:14.828 START TEST nvmf_digest_clean 00:29:14.828 ************************************ 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1090366 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1090366 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1090366 ']' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.828 [2024-11-20 12:43:20.152611] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:29:14.828 [2024-11-20 12:43:20.152647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.828 [2024-11-20 12:43:20.228581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.828 [2024-11-20 12:43:20.266436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.828 [2024-11-20 12:43:20.266468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.828 [2024-11-20 12:43:20.266475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.828 [2024-11-20 12:43:20.266480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.828 [2024-11-20 12:43:20.266485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:14.828 [2024-11-20 12:43:20.267056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.828 null0 00:29:14.828 [2024-11-20 12:43:20.412252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.828 [2024-11-20 12:43:20.436457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
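[Editor's note] The `run_bperf randread 4096 128 false` call above feeds positional arguments into the local assignments that follow in the trace (`rw`, `bs`, `qd`, `scan_dsa` at host/digest.sh@80). A minimal sketch of that argument handling, for orientation only — the function name and body here are illustrative, not SPDK's actual implementation, which goes on to launch bdevperf with these values:

```shell
# Sketch of run_bperf's positional-argument parsing as seen in this trace:
# $1 = workload, $2 = IO size in bytes, $3 = queue depth, $4 = DSA scan flag.
run_bperf_args() {
    local rw=$1 bs=$2 qd=$3 scan_dsa=$4
    echo "workload=$rw io_size=$bs queue_depth=$qd dsa=$scan_dsa"
}

run_bperf_args randread 4096 128 false
# workload=randread io_size=4096 queue_depth=128 dsa=false
```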
00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1090395 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1090395 /var/tmp/bperf.sock 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1090395 ']' 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.828 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.828 [2024-11-20 12:43:20.490096] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:14.828 [2024-11-20 12:43:20.490142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090395 ] 00:29:14.828 [2024-11-20 12:43:20.562952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.087 [2024-11-20 12:43:20.603158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.087 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.087 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:15.087 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:15.087 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:15.087 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:15.346 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.346 12:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.605 nvme0n1 00:29:15.605 12:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:15.605 12:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.864 Running I/O for 2 seconds... 00:29:17.736 27540.00 IOPS, 107.58 MiB/s [2024-11-20T11:43:23.500Z] 27355.50 IOPS, 106.86 MiB/s 00:29:17.736 Latency(us) 00:29:17.736 [2024-11-20T11:43:23.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.736 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:17.736 nvme0n1 : 2.04 26822.25 104.77 0.00 0.00 4672.64 2144.81 46709.29 00:29:17.736 [2024-11-20T11:43:23.500Z] =================================================================================================================== 00:29:17.736 [2024-11-20T11:43:23.500Z] Total : 26822.25 104.77 0.00 0.00 4672.64 2144.81 46709.29 00:29:17.736 { 00:29:17.736 "results": [ 00:29:17.736 { 00:29:17.736 "job": "nvme0n1", 00:29:17.736 "core_mask": "0x2", 00:29:17.736 "workload": "randread", 00:29:17.736 "status": "finished", 00:29:17.736 "queue_depth": 128, 00:29:17.736 "io_size": 4096, 00:29:17.736 "runtime": 2.044534, 00:29:17.736 "iops": 26822.248981919598, 00:29:17.736 "mibps": 104.77441008562343, 00:29:17.736 "io_failed": 0, 00:29:17.736 "io_timeout": 0, 00:29:17.736 "avg_latency_us": 4672.641000482403, 00:29:17.736 "min_latency_us": 2144.8145454545456, 00:29:17.736 "max_latency_us": 46709.29454545455 00:29:17.736 } 00:29:17.736 ], 00:29:17.736 "core_count": 1 00:29:17.736 } 00:29:17.736 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:17.736 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:17.736 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:17.736 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:17.736 | select(.opcode=="crc32c") 00:29:17.736 | "\(.module_name) \(.executed)"' 00:29:17.736 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1090395 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1090395 ']' 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1090395 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1090395 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1090395' 00:29:17.995 killing process with pid 1090395 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1090395 00:29:17.995 Received shutdown signal, test time was about 2.000000 seconds 00:29:17.995 00:29:17.995 Latency(us) 00:29:17.995 [2024-11-20T11:43:23.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.995 [2024-11-20T11:43:23.759Z] =================================================================================================================== 00:29:17.995 [2024-11-20T11:43:23.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.995 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1090395 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1090954 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1090954 /var/tmp/bperf.sock 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1090954 ']' 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.254 12:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:18.254 [2024-11-20 12:43:23.918251] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:18.254 [2024-11-20 12:43:23.918299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090954 ] 00:29:18.254 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:18.254 Zero copy mechanism will not be used. 
00:29:18.254 [2024-11-20 12:43:23.991756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.514 [2024-11-20 12:43:24.031323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.514 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.514 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:18.514 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:18.514 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:18.514 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:18.773 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.773 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.032 nvme0n1 00:29:19.032 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:19.032 12:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.032 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.032 Zero copy mechanism will not be used. 00:29:19.032 Running I/O for 2 seconds... 
00:29:21.013 6322.00 IOPS, 790.25 MiB/s [2024-11-20T11:43:26.777Z] 6344.50 IOPS, 793.06 MiB/s 00:29:21.013 Latency(us) 00:29:21.013 [2024-11-20T11:43:26.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.013 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:21.013 nvme0n1 : 2.00 6346.60 793.33 0.00 0.00 2518.68 543.65 7208.96 00:29:21.013 [2024-11-20T11:43:26.777Z] =================================================================================================================== 00:29:21.013 [2024-11-20T11:43:26.777Z] Total : 6346.60 793.33 0.00 0.00 2518.68 543.65 7208.96 00:29:21.013 { 00:29:21.013 "results": [ 00:29:21.013 { 00:29:21.013 "job": "nvme0n1", 00:29:21.013 "core_mask": "0x2", 00:29:21.013 "workload": "randread", 00:29:21.013 "status": "finished", 00:29:21.013 "queue_depth": 16, 00:29:21.013 "io_size": 131072, 00:29:21.013 "runtime": 2.001858, 00:29:21.013 "iops": 6346.604004879467, 00:29:21.013 "mibps": 793.3255006099333, 00:29:21.013 "io_failed": 0, 00:29:21.013 "io_timeout": 0, 00:29:21.013 "avg_latency_us": 2518.6825836642697, 00:29:21.013 "min_latency_us": 543.6509090909091, 00:29:21.013 "max_latency_us": 7208.96 00:29:21.013 } 00:29:21.013 ], 00:29:21.013 "core_count": 1 00:29:21.013 } 00:29:21.013 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:21.013 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:21.013 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:21.013 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:21.013 | select(.opcode=="crc32c") 00:29:21.013 | "\(.module_name) \(.executed)"' 00:29:21.013 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1090954 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1090954 ']' 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1090954 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1090954 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.271 12:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.271 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1090954' 00:29:21.271 killing process with pid 1090954 00:29:21.271 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1090954 00:29:21.271 Received shutdown signal, test time was about 2.000000 seconds 
00:29:21.271 00:29:21.271 Latency(us) 00:29:21.271 [2024-11-20T11:43:27.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.271 [2024-11-20T11:43:27.035Z] =================================================================================================================== 00:29:21.271 [2024-11-20T11:43:27.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.271 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1090954 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1091490 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1091490 /var/tmp/bperf.sock 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1091490 ']' 00:29:21.530 12:43:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:21.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.530 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.530 [2024-11-20 12:43:27.199231] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:21.530 [2024-11-20 12:43:27.199280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091490 ] 00:29:21.530 [2024-11-20 12:43:27.273510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.788 [2024-11-20 12:43:27.311193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.788 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.788 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:21.788 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.788 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.788 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:22.045 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.045 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.303 nvme0n1 00:29:22.303 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:22.303 12:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.303 Running I/O for 2 seconds... 
00:29:24.266 30646.00 IOPS, 119.71 MiB/s [2024-11-20T11:43:30.030Z] 30768.00 IOPS, 120.19 MiB/s 00:29:24.266 Latency(us) 00:29:24.266 [2024-11-20T11:43:30.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.266 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.266 nvme0n1 : 2.00 30795.45 120.29 0.00 0.00 4152.53 1660.74 10664.49 00:29:24.266 [2024-11-20T11:43:30.030Z] =================================================================================================================== 00:29:24.266 [2024-11-20T11:43:30.030Z] Total : 30795.45 120.29 0.00 0.00 4152.53 1660.74 10664.49 00:29:24.266 { 00:29:24.266 "results": [ 00:29:24.266 { 00:29:24.266 "job": "nvme0n1", 00:29:24.266 "core_mask": "0x2", 00:29:24.266 "workload": "randwrite", 00:29:24.266 "status": "finished", 00:29:24.266 "queue_depth": 128, 00:29:24.266 "io_size": 4096, 00:29:24.266 "runtime": 2.002374, 00:29:24.266 "iops": 30795.44580582848, 00:29:24.266 "mibps": 120.2947101790175, 00:29:24.266 "io_failed": 0, 00:29:24.266 "io_timeout": 0, 00:29:24.266 "avg_latency_us": 4152.532316837288, 00:29:24.266 "min_latency_us": 1660.7418181818182, 00:29:24.266 "max_latency_us": 10664.494545454545 00:29:24.266 } 00:29:24.266 ], 00:29:24.266 "core_count": 1 00:29:24.266 } 00:29:24.266 12:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:24.266 12:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:24.266 12:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:24.266 12:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:24.266 | select(.opcode=="crc32c") 00:29:24.266 | "\(.module_name) \(.executed)"' 00:29:24.266 12:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1091490 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1091490 ']' 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1091490 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091490 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091490' 00:29:24.524 killing process with pid 1091490 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1091490 00:29:24.524 Received shutdown signal, test time was about 2.000000 seconds 
00:29:24.524 00:29:24.524 Latency(us) 00:29:24.524 [2024-11-20T11:43:30.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.524 [2024-11-20T11:43:30.288Z] =================================================================================================================== 00:29:24.524 [2024-11-20T11:43:30.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.524 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1091490 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1092114 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1092114 /var/tmp/bperf.sock 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1092114 ']' 00:29:24.783 12:43:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.783 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.783 [2024-11-20 12:43:30.443023] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:24.783 [2024-11-20 12:43:30.443072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092114 ] 00:29:24.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:24.783 Zero copy mechanism will not be used. 
00:29:24.783 [2024-11-20 12:43:30.516466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.042 [2024-11-20 12:43:30.556540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.042 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.042 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:25.042 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:25.042 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:25.042 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:25.301 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.301 12:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.301 nvme0n1 00:29:25.301 12:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:25.301 12:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:25.560 Zero copy mechanism will not be used. 00:29:25.560 Running I/O for 2 seconds... 
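After each run, the digest test checks which accel module executed the crc32c work by piping `accel_get_stats` through the jq filter shown in this log (`.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"`). A Python equivalent of that filter, assuming a stats payload shaped like the RPC's JSON output; the sample data below is illustrative, not taken from this run:

```python
def crc32c_stats(stats: dict) -> list:
    """Return '<module_name> <executed>' for each crc32c operation entry,
    mirroring the jq filter used by host/digest.sh."""
    return [
        f'{op["module_name"]} {op["executed"]}'
        for op in stats.get("operations", [])
        if op.get("opcode") == "crc32c"
    ]

# Hypothetical payload for illustration only.
sample = {"operations": [{"opcode": "crc32c", "module_name": "software", "executed": 40}]}
print(crc32c_stats(sample))  # ['software 40']
```

The test then asserts `acc_executed > 0` and that the reporting module matches the expected one (`software` here, since DSA scan is disabled).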
00:29:27.432 8122.00 IOPS, 1015.25 MiB/s [2024-11-20T11:43:33.196Z] 7357.50 IOPS, 919.69 MiB/s 00:29:27.432 Latency(us) 00:29:27.432 [2024-11-20T11:43:33.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.432 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:27.432 nvme0n1 : 2.00 7352.71 919.09 0.00 0.00 2172.13 1295.83 7149.38 00:29:27.432 [2024-11-20T11:43:33.196Z] =================================================================================================================== 00:29:27.432 [2024-11-20T11:43:33.196Z] Total : 7352.71 919.09 0.00 0.00 2172.13 1295.83 7149.38 00:29:27.432 { 00:29:27.432 "results": [ 00:29:27.432 { 00:29:27.432 "job": "nvme0n1", 00:29:27.432 "core_mask": "0x2", 00:29:27.432 "workload": "randwrite", 00:29:27.432 "status": "finished", 00:29:27.432 "queue_depth": 16, 00:29:27.432 "io_size": 131072, 00:29:27.432 "runtime": 2.00348, 00:29:27.432 "iops": 7352.706291053567, 00:29:27.432 "mibps": 919.0882863816959, 00:29:27.432 "io_failed": 0, 00:29:27.432 "io_timeout": 0, 00:29:27.432 "avg_latency_us": 2172.130733332922, 00:29:27.432 "min_latency_us": 1295.8254545454545, 00:29:27.432 "max_latency_us": 7149.381818181818 00:29:27.432 } 00:29:27.432 ], 00:29:27.432 "core_count": 1 00:29:27.432 } 00:29:27.432 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:27.432 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:27.432 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:27.432 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:27.432 | select(.opcode=="crc32c") 00:29:27.432 | "\(.module_name) \(.executed)"' 00:29:27.432 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1092114 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1092114 ']' 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1092114 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092114 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092114' 00:29:27.691 killing process with pid 1092114 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1092114 00:29:27.691 Received shutdown signal, test time was about 2.000000 seconds 
00:29:27.691 00:29:27.691 Latency(us) 00:29:27.691 [2024-11-20T11:43:33.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.691 [2024-11-20T11:43:33.455Z] =================================================================================================================== 00:29:27.691 [2024-11-20T11:43:33.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.691 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1092114 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1090366 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1090366 ']' 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1090366 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1090366 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1090366' 00:29:27.951 killing process with pid 1090366 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1090366 00:29:27.951 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1090366 00:29:28.211 00:29:28.211 
real 0m13.666s 00:29:28.211 user 0m26.533s 00:29:28.211 sys 0m3.864s 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.211 ************************************ 00:29:28.211 END TEST nvmf_digest_clean 00:29:28.211 ************************************ 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:28.211 ************************************ 00:29:28.211 START TEST nvmf_digest_error 00:29:28.211 ************************************ 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1092843 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1092843 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1092843 ']' 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.211 12:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.211 [2024-11-20 12:43:33.888372] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:28.211 [2024-11-20 12:43:33.888415] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.211 [2024-11-20 12:43:33.962120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.470 [2024-11-20 12:43:34.000152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.470 [2024-11-20 12:43:34.000184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:28.470 [2024-11-20 12:43:34.000190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.470 [2024-11-20 12:43:34.000196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.470 [2024-11-20 12:43:34.000200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.470 [2024-11-20 12:43:34.000761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.470 [2024-11-20 12:43:34.065183] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.470 12:43:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.470 null0 00:29:28.470 [2024-11-20 12:43:34.158833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.470 [2024-11-20 12:43:34.183046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1092868 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1092868 /var/tmp/bperf.sock 00:29:28.470 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:28.471 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1092868 ']' 
00:29:28.471 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.471 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.471 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:28.471 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.471 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.729 [2024-11-20 12:43:34.232732] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:28.729 [2024-11-20 12:43:34.232770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092868 ] 00:29:28.729 [2024-11-20 12:43:34.305057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.729 [2024-11-20 12:43:34.342511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.729 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.729 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:28.729 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:28.729 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:28.988 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:28.988 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.988 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.988 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.988 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:28.988 12:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.556 nvme0n1 00:29:29.556 12:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:29.556 12:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.556 12:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.556 12:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.556 12:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:29.556 12:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.556 Running I/O for 2 seconds... 00:29:29.556 [2024-11-20 12:43:35.184433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.556 [2024-11-20 12:43:35.184466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.556 [2024-11-20 12:43:35.184475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.556 [2024-11-20 12:43:35.195104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.556 [2024-11-20 12:43:35.195130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.556 [2024-11-20 12:43:35.195139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.556 [2024-11-20 12:43:35.202436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.556 [2024-11-20 12:43:35.202458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.556 [2024-11-20 12:43:35.202467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.556 [2024-11-20 12:43:35.211787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.556 [2024-11-20 12:43:35.211810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22536 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.556 [2024-11-20 12:43:35.211819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.556 [2024-11-20 12:43:35.221233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.556 [2024-11-20 12:43:35.221255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.556 [2024-11-20 12:43:35.221270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.556 [2024-11-20 12:43:35.232668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.232690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.232697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.244066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.244087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.244095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.253597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.253617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.253625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.262238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.262259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.262267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.270885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.270904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.270912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.280638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.280659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.280666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.288280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.288300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.288308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.299867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.299888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.299896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.557 [2024-11-20 12:43:35.308891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.557 [2024-11-20 12:43:35.308915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.557 [2024-11-20 12:43:35.308923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.317629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.317650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.317658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.326540] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.326561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.326569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.334375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.334395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.334402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.345956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.345976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.345984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.356722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.356742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.356750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.367637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.367657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.367665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.375879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.375899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.375907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.387169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.387188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.387196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.395014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.395034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.395042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.404901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.404920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.404928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.412448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.412467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.412475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.422558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.422577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.422585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.431986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.432006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 
12:43:35.432014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.440780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.440801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.440809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.449959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.449979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.449987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.457472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.457492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.457500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.467863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.467883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17529 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.467894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.475396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.475420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.475429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.487092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.487111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.487119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.496349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.496368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.496375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.504144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.504164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.504172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.513843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.513863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.513870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.523586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.523606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.523614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.531129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.531149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.531156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.540762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.540782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.540790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.549625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.817 [2024-11-20 12:43:35.549645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.817 [2024-11-20 12:43:35.549653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.817 [2024-11-20 12:43:35.557465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.818 [2024-11-20 12:43:35.557485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.818 [2024-11-20 12:43:35.557493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.818 [2024-11-20 12:43:35.568054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:29.818 [2024-11-20 12:43:35.568074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.818 [2024-11-20 12:43:35.568081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.077 [2024-11-20 12:43:35.577816] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.077 [2024-11-20 12:43:35.577836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.077 [2024-11-20 12:43:35.577844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.077 [2024-11-20 12:43:35.587694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.077 [2024-11-20 12:43:35.587714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.077 [2024-11-20 12:43:35.587722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.077 [2024-11-20 12:43:35.595619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.077 [2024-11-20 12:43:35.595639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.077 [2024-11-20 12:43:35.595647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.077 [2024-11-20 12:43:35.604947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.077 [2024-11-20 12:43:35.604966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.077 [2024-11-20 12:43:35.604974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:30.077 [2024-11-20 12:43:35.614301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.077 [2024-11-20 12:43:35.614321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.077 [2024-11-20 12:43:35.614329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.077 [2024-11-20 12:43:35.622793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.622812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.622823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.630995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.631016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.631024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.641036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.641056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.641064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.648999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.649019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.649026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.658430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.658450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.658458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.667982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.668002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.668009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.677190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.677209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 
12:43:35.677217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.685018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.685037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.685045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.694368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.694388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.694396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.704518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.704541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.704550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.711774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.711794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3695 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.711802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.722604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.722625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.722633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.730585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.730605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.730613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.740734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.740754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.740762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.752085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.752105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.752113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.763567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.763588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.763595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.775143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.775163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.775171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.784384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.784404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.784417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.792171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.792191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.792199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.801066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.801085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.801093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.809767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.809786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.809794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.817817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.817836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.817844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.827257] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.827277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.827284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.078 [2024-11-20 12:43:35.836596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.078 [2024-11-20 12:43:35.836615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.078 [2024-11-20 12:43:35.836623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.339 [2024-11-20 12:43:35.845190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.339 [2024-11-20 12:43:35.845209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.339 [2024-11-20 12:43:35.845216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.339 [2024-11-20 12:43:35.852935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.339 [2024-11-20 12:43:35.852954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.339 [2024-11-20 12:43:35.852962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.862283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.862303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.862314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.871388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.871407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.871420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.879748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.879768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.879775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.887678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.887697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.887705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.896439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.896459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.896467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.905398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.905421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.905429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.914927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.914946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.914954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.923721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.923741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.923748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.932352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.932372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.932379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.940899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.940919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.940926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.950949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.950968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.950977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.961531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.961551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.961559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.971808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.971828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.971836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.980124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.980145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.980153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.989548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.989568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.989576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:35.997790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:35.997810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:35.997818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:36.006767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:36.006787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:36.006796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:36.014832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:36.014853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:36.014865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:36.025176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:36.025198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:36.025205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:36.036654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:36.036673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:36.036681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:36.049121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:36.049140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.339 [2024-11-20 12:43:36.049148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.339 [2024-11-20 12:43:36.058278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.339 [2024-11-20 12:43:36.058298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.340 [2024-11-20 12:43:36.058306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.340 [2024-11-20 12:43:36.065706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.340 [2024-11-20 12:43:36.065725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.340 [2024-11-20 12:43:36.065733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.340 [2024-11-20 12:43:36.075206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.340 [2024-11-20 12:43:36.075226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.340 [2024-11-20 12:43:36.075234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.340 [2024-11-20 12:43:36.085226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.340 [2024-11-20 12:43:36.085246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.340 [2024-11-20 12:43:36.085254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.340 [2024-11-20 12:43:36.093003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.340 [2024-11-20 12:43:36.093023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.340 [2024-11-20 12:43:36.093031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.599 [2024-11-20 12:43:36.103563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.599 [2024-11-20 12:43:36.103587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.599 [2024-11-20 12:43:36.103595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.599 [2024-11-20 12:43:36.111138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.599 [2024-11-20 12:43:36.111158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.599 [2024-11-20 12:43:36.111165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.599 [2024-11-20 12:43:36.120418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.599 [2024-11-20 12:43:36.120439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.599 [2024-11-20 12:43:36.120446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.599 [2024-11-20 12:43:36.129897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.599 [2024-11-20 12:43:36.129917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.599 [2024-11-20 12:43:36.129925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.599 [2024-11-20 12:43:36.138498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.138518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.138525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.147087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.147107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.147115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.154885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.154905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.154912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.163935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.163954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.163962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.173006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.173025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.173033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 27599.00 IOPS, 107.81 MiB/s [2024-11-20T11:43:36.364Z] [2024-11-20 12:43:36.181917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.181937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.181945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.190749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.190769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.190777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.198855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.198874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.198882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.208033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.208053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.208060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.219527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.219547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.219555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.230878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.230899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.230907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.241446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.241466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.241474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.249795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.249815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.249822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.259723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.259744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.259755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.270129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.270149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.270157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.277648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.277667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.277675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.289153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.289175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.289183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.299681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.299702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.299709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.307087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.307108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.307115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.318452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.318472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.318480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.327887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.327907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.327915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.338018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.338038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.338046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.345701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.345722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.345729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.600 [2024-11-20 12:43:36.357312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.600 [2024-11-20 12:43:36.357333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.600 [2024-11-20 12:43:36.357341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.367520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.367549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.367557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.376733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.376753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.376760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.383940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.383959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.383967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.393854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.393874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.393882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.403384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.403404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.403417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.412108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.412128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.412136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.423015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.423036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.423049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.431922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.431941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.431949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.439333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.439353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.439361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.450155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.450175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.450183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.457847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.457867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.457875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.468945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.468966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.468974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.860 [2024-11-20 12:43:36.479453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.860 [2024-11-20 12:43:36.479473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.860 [2024-11-20 12:43:36.479482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.487376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.487397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.487405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.498265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.498285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.498293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.506488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.506512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.506520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.517642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.517663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.517670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.525803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.525824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.525832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.534973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.534993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.535001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.544433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.544454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.544461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.553130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.553150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.553157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.561968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.561989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.561997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.570484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.570503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.570511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.578066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.578087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.578095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.587569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.587591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.587599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.597886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.597907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.597916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.607417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30)
00:29:30.861 [2024-11-20 12:43:36.607437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.861 [2024-11-20 12:43:36.607445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.861 [2024-11-20 12:43:36.615755]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:30.861 [2024-11-20 12:43:36.615775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.861 [2024-11-20 12:43:36.615783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.623287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.623307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.623315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.633317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.633339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.633347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.643199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.643219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.643228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.651484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.651503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.651511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.660270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.660291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.660302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.669123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.669143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.669151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.678662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.678682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.678690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.688208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.688228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.688236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.695897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.695917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.695925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.704867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.704888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 12:43:36.704895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.715286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.715306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.121 [2024-11-20 
12:43:36.715314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.121 [2024-11-20 12:43:36.723172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.121 [2024-11-20 12:43:36.723193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.723200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.732901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.732922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.732929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.744138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.744158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.744167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.753126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.753147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7728 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.753155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.760800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.760821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.760829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.768845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.768866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.768874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.778184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.778204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.778212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.787388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.787409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.787423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.796903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.796922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.796930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.804778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.804798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.804805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.816340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.816360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.816371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.825963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.825983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.825991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.833636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.833655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.833663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.844156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.844176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.844184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.853402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.853428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.853436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.860810] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.860830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.860837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.871678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.871698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.871707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.122 [2024-11-20 12:43:36.879393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.122 [2024-11-20 12:43:36.879420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.122 [2024-11-20 12:43:36.879428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.889543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.889563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.889571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.898451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.898473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.898481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.906148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.906167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.906174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.915077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.915096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.915104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.923659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.923679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.923687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.933435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.933454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.933462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.941871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.941891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.941899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.382 [2024-11-20 12:43:36.950418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.382 [2024-11-20 12:43:36.950438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.382 [2024-11-20 12:43:36.950447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:36.959669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:36.959690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 
12:43:36.959697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:36.968378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:36.968399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:36.968407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:36.977742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:36.977761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:36.977768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:36.984907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:36.984927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:36.984935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:36.995592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:36.995612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21703 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:36.995619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.005988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.006008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.006016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.017564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.017585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.017592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.026845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.026864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.026872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.034631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.034651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.034659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.043257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.043278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.043286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.052995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.053016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.053029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.061639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.061659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.061667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.070892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.070912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.070920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.078670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.078689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.078697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.088866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.088886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.088894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.098654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.098673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.098681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.106095] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.106114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.106121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.115238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.115257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.115264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.125530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.125549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.125557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.383 [2024-11-20 12:43:37.135529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.383 [2024-11-20 12:43:37.135549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.383 [2024-11-20 12:43:37.135557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:31.643 [2024-11-20 12:43:37.143670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.643 [2024-11-20 12:43:37.143690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.643 [2024-11-20 12:43:37.143697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.643 [2024-11-20 12:43:37.155535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.643 [2024-11-20 12:43:37.155555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.643 [2024-11-20 12:43:37.155562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.643 [2024-11-20 12:43:37.162968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.643 [2024-11-20 12:43:37.162988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.643 [2024-11-20 12:43:37.162995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.643 [2024-11-20 12:43:37.173523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d6f30) 00:29:31.643 [2024-11-20 12:43:37.173542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.643 [2024-11-20 12:43:37.173550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.643 27589.00 IOPS, 107.77 MiB/s 00:29:31.643 Latency(us) 00:29:31.643 [2024-11-20T11:43:37.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:31.643 nvme0n1 : 2.00 27608.15 107.84 0.00 0.00 4632.23 2338.44 15490.33 00:29:31.643 [2024-11-20T11:43:37.407Z] =================================================================================================================== 00:29:31.643 [2024-11-20T11:43:37.407Z] Total : 27608.15 107.84 0.00 0.00 4632.23 2338.44 15490.33 00:29:31.643 { 00:29:31.643 "results": [ 00:29:31.643 { 00:29:31.643 "job": "nvme0n1", 00:29:31.643 "core_mask": "0x2", 00:29:31.643 "workload": "randread", 00:29:31.643 "status": "finished", 00:29:31.643 "queue_depth": 128, 00:29:31.643 "io_size": 4096, 00:29:31.643 "runtime": 2.003249, 00:29:31.643 "iops": 27608.15055941623, 00:29:31.643 "mibps": 107.84433812271965, 00:29:31.643 "io_failed": 0, 00:29:31.643 "io_timeout": 0, 00:29:31.643 "avg_latency_us": 4632.228278371901, 00:29:31.643 "min_latency_us": 2338.4436363636364, 00:29:31.643 "max_latency_us": 15490.327272727272 00:29:31.643 } 00:29:31.643 ], 00:29:31.643 "core_count": 1 00:29:31.643 } 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:31.643 | .driver_specific 00:29:31.643 | .nvme_error 
00:29:31.643 | .status_code 00:29:31.643 | .command_transient_transport_error' 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 )) 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1092868 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1092868 ']' 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1092868 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.643 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092868 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092868' 00:29:31.902 killing process with pid 1092868 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1092868 00:29:31.902 Received shutdown signal, test time was about 2.000000 seconds 00:29:31.902 00:29:31.902 Latency(us) 00:29:31.902 [2024-11-20T11:43:37.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.902 [2024-11-20T11:43:37.666Z] =================================================================================================================== 00:29:31.902 [2024-11-20T11:43:37.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1092868 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:31.902 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1093410 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1093410 /var/tmp/bperf.sock 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1093410 ']' 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:31.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.903 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:31.903 [2024-11-20 12:43:37.634457] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:31.903 [2024-11-20 12:43:37.634502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093410 ] 00:29:31.903 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:31.903 Zero copy mechanism will not be used. 00:29:32.162 [2024-11-20 12:43:37.708436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.162 [2024-11-20 12:43:37.747418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.162 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.162 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:32.162 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:32.162 12:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:32.420 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:32.420 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.420 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.420 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.420 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.420 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.679 nvme0n1 00:29:32.679 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:32.679 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.679 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.679 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.679 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:32.679 12:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:32.940 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:32.940 Zero copy mechanism will not be used. 00:29:32.940 Running I/O for 2 seconds... 
00:29:32.940 [2024-11-20 12:43:38.477406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.477445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.477455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.482477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.482504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.482516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.487095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.487117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.487130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.491649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.491672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.491680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.496149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.496172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.496180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.500680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.500702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.500710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.505208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.505229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.505237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.509789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.509811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.509818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.514310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.514331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.514339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.518766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.518788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.518795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.523339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.523360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.523368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.527916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.527942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.940 [2024-11-20 12:43:38.527949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.532496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.532518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.532525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.537025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.537047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.537054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.541626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.541646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.541656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.546207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.546228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.546236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.550865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.550887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.550894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.555423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.555443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.555451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.559953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.559976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.559983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.564535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.564556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.564564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.569040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.569062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.569069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.573506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.940 [2024-11-20 12:43:38.573528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.940 [2024-11-20 12:43:38.573535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.940 [2024-11-20 12:43:38.578058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.578080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.578087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.582576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 
00:29:32.941 [2024-11-20 12:43:38.582597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.582605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.587054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.587076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.587084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.590110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.590130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.590138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.593523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.593544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.593552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.598033] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.598054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.598061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.602458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.602479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.602490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.606898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.606919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.606927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.611346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.611367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.611375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.615851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.615872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.615879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.620357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.620378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.620386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.624776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.624796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.624804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.629206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.629227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.629234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.633602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.633623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.633630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.638055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.638075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.638083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.642585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.642610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.642618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.647062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.647083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.647091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.651600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.651622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.651629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.656091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.656112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.656120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.660623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.660644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.660652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.665127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.665148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.665155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.669603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.669624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.669632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.674117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.674139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.674146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.678633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.678654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.678665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.683160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.683182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.683189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.687558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.687579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.687586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.691988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.941 [2024-11-20 12:43:38.692009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.941 [2024-11-20 12:43:38.692016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.941 [2024-11-20 12:43:38.696443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:32.942 [2024-11-20 12:43:38.696464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.942 [2024-11-20 12:43:38.696471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.700880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.700901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.700908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.705385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.705405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.705417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.709849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.709870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.709878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.714367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.714388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.714396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.718899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.718924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.718932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.724244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.724267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.724274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.729144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.729166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.729174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.734321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.734343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.734351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.739633] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.739655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.739663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.744978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.744999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.745007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.750072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.750094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.750102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.753042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.753063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.753070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.758214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.758235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.758243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.763524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.763546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.763554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.768847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.768868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.768876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.773582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.773603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.773611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.778467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.778487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.778495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.783676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.783698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.783706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.790210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.790232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.790240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.796548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.796569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 
12:43:38.796577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.801936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.801957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.801965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.807043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.202 [2024-11-20 12:43:38.807064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.202 [2024-11-20 12:43:38.807076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.202 [2024-11-20 12:43:38.812129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.812151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.812159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.817474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.817495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.817503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.823805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.823828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.823835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.830969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.830992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.831000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.836144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.836166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.836174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.841217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.841240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.841248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.846545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.846567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.846574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.851770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.851792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.851800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.858199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.858226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.858234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.865000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 
00:29:33.203 [2024-11-20 12:43:38.865022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.865030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.871967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.871989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.871998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.879144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.879165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.879173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.884712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.884734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.884742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.889703] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.889724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.889733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.894572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.894593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.894600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.897780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.897801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.897809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.903819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.903840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.903848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.907955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.907976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.907984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.911998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.912020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.912028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.916400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.916427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.916435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.920886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.920907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.920915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.925491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.925512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.925519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.930069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.930090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.930098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.934714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.203 [2024-11-20 12:43:38.934736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.203 [2024-11-20 12:43:38.934745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.203 [2024-11-20 12:43:38.939167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.204 [2024-11-20 12:43:38.939188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.204 [2024-11-20 12:43:38.939196] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.204 [2024-11-20 12:43:38.943760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.204 [2024-11-20 12:43:38.943781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.204 [2024-11-20 12:43:38.943793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.204 [2024-11-20 12:43:38.948401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.204 [2024-11-20 12:43:38.948430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.204 [2024-11-20 12:43:38.948438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.204 [2024-11-20 12:43:38.952621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.204 [2024-11-20 12:43:38.952644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.204 [2024-11-20 12:43:38.952652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.204 [2024-11-20 12:43:38.957048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.204 [2024-11-20 12:43:38.957070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:33.204 [2024-11-20 12:43:38.957079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.961382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.961403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.961418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.966101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.966124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.966132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.970910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.970932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.970939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.975435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.975456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.975464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.979892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.979913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.979920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.984097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.984119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.984127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.988545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.988567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.988574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.993185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.993206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.993213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:38.997793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:38.997814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:38.997821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.002392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.002418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.002426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.007058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.007079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.007086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.011574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 
00:29:33.465 [2024-11-20 12:43:39.011595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.011603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.015999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.016020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.020424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.020445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.020456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.024945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.024966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.024974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.029486] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.029507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.029515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.034000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.034021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.465 [2024-11-20 12:43:39.034029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.465 [2024-11-20 12:43:39.038618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.465 [2024-11-20 12:43:39.038640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.038648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.043091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.043112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.043120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.047689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.047710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.047718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.052225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.052246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.052253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.056679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.056701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.056708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.061205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.061232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.061240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.065860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.065883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.065890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.070455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.070477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.070485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.074349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.074370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.074378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.077060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.077082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.077089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.082028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.082048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.082056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.086030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.086050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.086058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.090550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.090571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.090579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.095049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.095070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:33.466 [2024-11-20 12:43:39.095077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.099344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.099365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.099372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.104599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.104621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.104628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.109108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.109130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.109138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.113557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.113578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.113585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.117992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.118013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.118021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.122502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.122523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.122531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.126849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.126870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.126878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.131235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.131256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.131264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.135321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.135343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.135355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.139699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.139720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.139727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.144097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.144117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.144124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.148574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 
00:29:33.466 [2024-11-20 12:43:39.148595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.148602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.153024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.153044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.153052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.157579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.157600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.157608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.162099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.162119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.162127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.466 [2024-11-20 12:43:39.166663] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.466 [2024-11-20 12:43:39.166684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.466 [2024-11-20 12:43:39.166692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.171196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.171217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.171225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.175629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.175654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.175662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.180180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.180201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.180209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.184655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.184677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.184685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.189163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.189184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.189191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.193641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.193662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.193670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.197803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.197825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.197833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.202210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.202232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.202239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.206645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.206665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.206673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.211128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.211150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.211157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.215505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.215526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.215533] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.467 [2024-11-20 12:43:39.219903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.467 [2024-11-20 12:43:39.219925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.467 [2024-11-20 12:43:39.219932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.727 [2024-11-20 12:43:39.224315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.727 [2024-11-20 12:43:39.224336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.727 [2024-11-20 12:43:39.224344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.727 [2024-11-20 12:43:39.228689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.727 [2024-11-20 12:43:39.228710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.727 [2024-11-20 12:43:39.228718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.727 [2024-11-20 12:43:39.233104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.727 [2024-11-20 12:43:39.233126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:33.727 [2024-11-20 12:43:39.233133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.727 [2024-11-20 12:43:39.237387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.727 [2024-11-20 12:43:39.237407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.727 [2024-11-20 12:43:39.237427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.727 [2024-11-20 12:43:39.239909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.727 [2024-11-20 12:43:39.239930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.727 [2024-11-20 12:43:39.239937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.727 [2024-11-20 12:43:39.244478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.727 [2024-11-20 12:43:39.244497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.727 [2024-11-20 12:43:39.244505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.727 [2024-11-20 12:43:39.248982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.728 [2024-11-20 12:43:39.249002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.249013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.253355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.253376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.253384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.257791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.257811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.257819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.262203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.262224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.262232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.266676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.266697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.266704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.271170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.271191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.271198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.275369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.275390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.275397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.279814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.279836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.279843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.284291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.284312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.284319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.288815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.288839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.288846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.293278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.293300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.293307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.297788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.297809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.297816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.302331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.302352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.302359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.306827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.306848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.306855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.311373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.311394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.311402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.315954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.315976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.315984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.320471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.320492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.320500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.325027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.325047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.325055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.329582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.329604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.329613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.334106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.334128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.334136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.338642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.338665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.338673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.343074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.343095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.343102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.347828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.347851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.728 [2024-11-20 12:43:39.347858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.728 [2024-11-20 12:43:39.352258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.728 [2024-11-20 12:43:39.352278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.352286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.356765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.356786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.356794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.361278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.361299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.361306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.365680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.365701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.365712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.370101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.370123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.370130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.374517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.374537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.374545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.378941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.378963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.378970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.383308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.383329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.383337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.387741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.387762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.387769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.392149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.392170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.392177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.396597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.396619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.396626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.401064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.401085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.401092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.405512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.405532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.405539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.409980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.410001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.410009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.414407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.414432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.414440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.418867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.418888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.418896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.423265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.423285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.423293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.427669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.427690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.427697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.432080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.432101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.432109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.436550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.436571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.436578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.441092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.441114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.441125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.445644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.445664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.445672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.449782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.449804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.449812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.729 [2024-11-20 12:43:39.454232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.729 [2024-11-20 12:43:39.454253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.729 [2024-11-20 12:43:39.454261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.730 [2024-11-20 12:43:39.456783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.730 [2024-11-20 12:43:39.456804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.730 [2024-11-20 12:43:39.456811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.730 [2024-11-20 12:43:39.461175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.730 [2024-11-20 12:43:39.461195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.730 [2024-11-20 12:43:39.461203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.730 [2024-11-20 12:43:39.465483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.730 [2024-11-20 12:43:39.465503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.730 [2024-11-20 12:43:39.465510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.730 [2024-11-20 12:43:39.469728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.730 [2024-11-20 12:43:39.469748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.730 [2024-11-20 12:43:39.469756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.730 [2024-11-20 12:43:39.474214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.730 [2024-11-20 12:43:39.474234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.730 [2024-11-20 12:43:39.474241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.730 6727.00 IOPS, 840.88 MiB/s [2024-11-20T11:43:39.494Z] [2024-11-20 12:43:39.479832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.730 [2024-11-20 12:43:39.479856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.730 [2024-11-20 12:43:39.479864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.730 [2024-11-20 12:43:39.485096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.730 [2024-11-20 12:43:39.485119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.730 [2024-11-20 12:43:39.485127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.492069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.492093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.492102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.497892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.497915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.497923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.503291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.503313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.503321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.509164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.509186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.509194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.514355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.514377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.514385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.520700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.520724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.520732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.527342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.527365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.527373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.533751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.990 [2024-11-20 12:43:39.533773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.990 [2024-11-20 12:43:39.533781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.990 [2024-11-20 12:43:39.540538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.540561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.540569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.547466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.547487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.547495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.554409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.554437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.554445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.561266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.561288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.561296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.568167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.568190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.568199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.575428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.575449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.575458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.582039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.582061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.582069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.586590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.586613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.586624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.591906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.591928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.591936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.596355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.596376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.596383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.600838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.600860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.600868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.605564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.605585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.605592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.610078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.610099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.610106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.614118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.614138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.614146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.616854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.616875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.616882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.621267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.621288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.621296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.625798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.625822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.625830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.630452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.630473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.630481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.634918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.634939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.634947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:33.991 [2024-11-20 12:43:39.639450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:33.991 [2024-11-20 12:43:39.639470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.991 [2024-11-20 12:43:39.639479] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.991 [2024-11-20 12:43:39.643834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.991 [2024-11-20 12:43:39.643855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.991 [2024-11-20 12:43:39.643862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.991 [2024-11-20 12:43:39.648301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.991 [2024-11-20 12:43:39.648323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.991 [2024-11-20 12:43:39.648330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.991 [2024-11-20 12:43:39.652715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.991 [2024-11-20 12:43:39.652735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.652743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.657165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.657186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:33.992 [2024-11-20 12:43:39.657193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.661551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.661572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.661579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.666057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.666078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.666085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.670570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.670591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.670598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.674971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.674991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.674998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.679446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.679466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.679474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.683812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.683832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.683839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.688243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.688263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.688271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.692638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.692658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.692665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.697059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.697079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.697086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.701525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.701545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.701556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.706011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.706032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.706039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.710545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.710566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.710573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.715627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.715646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.715654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.719553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.719573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.719580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.724037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.724058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.724066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.728530] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.728550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.728558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.732974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.732994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.733002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.737456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.737476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.737484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.741826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.741847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.741854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:33.992 [2024-11-20 12:43:39.746281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:33.992 [2024-11-20 12:43:39.746301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.992 [2024-11-20 12:43:39.746309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.751198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.751219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.751226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.755003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.755023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.755030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.759447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.759467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.759474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.763996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.764017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.764024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.768544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.768565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.768573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.773042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.773063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.773070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.777387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.777409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 
12:43:39.777426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.782004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.782025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.782033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.786588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.786610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.786617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.790981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.791002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.791009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.795440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.795462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.795469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.799842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.799863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.799870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.804226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.804247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.804254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.808690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.808711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.808719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.813128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.813149] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.813157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.817624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.817649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.817656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.822194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.822215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.822222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.826727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 12:43:39.826748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.826756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.831307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.253 [2024-11-20 
12:43:39.831328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.253 [2024-11-20 12:43:39.831336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.253 [2024-11-20 12:43:39.835752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.254 [2024-11-20 12:43:39.835772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.254 [2024-11-20 12:43:39.835780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.254 [2024-11-20 12:43:39.840267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.254 [2024-11-20 12:43:39.840287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.254 [2024-11-20 12:43:39.840295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.254 [2024-11-20 12:43:39.844690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.254 [2024-11-20 12:43:39.844711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.254 [2024-11-20 12:43:39.844718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.254 [2024-11-20 12:43:39.849106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe03de0) 00:29:34.254 [2024-11-20 12:43:39.849126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.254 [2024-11-20 12:43:39.849135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.254 [2024-11-20 12:43:39.853546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.254 [2024-11-20 12:43:39.853566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.254 [2024-11-20 12:43:39.853573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.254 [2024-11-20 12:43:39.857950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.254 [2024-11-20 12:43:39.857971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.254 [2024-11-20 12:43:39.857978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.254 [2024-11-20 12:43:39.862392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.254 [2024-11-20 12:43:39.862419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.254 [2024-11-20 12:43:39.862427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.254 [2024-11-20 12:43:39.866892] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.866913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.866921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.871459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.871480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.871487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.875896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.875917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.875924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.880391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.880419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.880427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.884861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.884882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.884889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.889312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.889333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.889340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.893676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.893696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.893707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.898284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.898305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.898313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.902532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.902554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.902562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.906857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.906879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.906886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.911199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.911221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.911228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.915515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.915536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.915543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.919619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.919640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.919648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.923957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.923978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.923985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.928318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.928338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.928346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.932670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.932695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.932702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.937127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.937147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.937154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.941529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.941549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.941557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.945960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.945981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.945989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.950401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.254 [2024-11-20 12:43:39.950428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.254 [2024-11-20 12:43:39.950436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.254 [2024-11-20 12:43:39.954860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.954880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.954887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.959249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.959269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.959278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.963773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.963793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.963800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.968155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.968177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.968185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.972569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.972590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.972597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.976987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.977008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.977015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.981332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.981353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.981360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.985719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.985739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.985747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.990213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.990234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.990242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.994602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.994623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.994631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:39.999085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:39.999106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:39.999114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:40.003665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:40.003687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:40.003694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.255 [2024-11-20 12:43:40.008495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.255 [2024-11-20 12:43:40.008520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.255 [2024-11-20 12:43:40.008534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.515 [2024-11-20 12:43:40.013358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.515 [2024-11-20 12:43:40.013381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.515 [2024-11-20 12:43:40.013388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.515 [2024-11-20 12:43:40.017789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.515 [2024-11-20 12:43:40.017810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.515 [2024-11-20 12:43:40.017818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.515 [2024-11-20 12:43:40.021886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.515 [2024-11-20 12:43:40.021909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.021917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.025944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.025967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.025975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.030204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.030226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.030233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.035887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.035915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.035925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.040392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.040419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.040428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.042841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.042862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.042870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.047217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.047242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.047251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.051603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.051624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.051632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.056004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.056024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.056032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.060437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.060458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.060466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.065212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.065236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.065244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.069802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.069824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.069832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.074380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.074401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.074409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.078807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.078828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.078836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.083041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.083064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.083072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.087592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.087614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.087625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.092068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.092091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.092098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.096607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.096629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.096638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.101106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.101128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.101135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.105705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.105727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.105734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.110284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.110305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.110313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.114814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.114835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.114842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.119321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.119342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.119350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.123831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.123870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.123878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.128321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.128342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.128350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.132754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.132776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.132783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.137176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.516 [2024-11-20 12:43:40.137197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.516 [2024-11-20 12:43:40.137205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.516 [2024-11-20 12:43:40.141730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.141751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.141759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.146373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.146393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.146401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.150912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.150933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.150942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.155322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.155342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.155350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.159798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.159820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.159828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.164224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.164245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.164252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.168754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.168775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.168783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.173245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.173266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.173273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.177743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.177764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.177772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.181906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.181929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.181936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.186343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.186364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.186372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.190747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.190767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.190775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.194990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.195010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.195018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.199230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.199251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.199263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.203642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.203663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.203670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.208047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.208067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.208075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.212486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.212508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.212515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.216997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.217018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.217025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.221402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.221429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.517 [2024-11-20 12:43:40.221436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.517 [2024-11-20 12:43:40.225877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.517 [2024-11-20 12:43:40.225897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1
lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.225904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.230329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.517 [2024-11-20 12:43:40.230350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.230358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.234780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.517 [2024-11-20 12:43:40.234801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.234808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.239323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.517 [2024-11-20 12:43:40.239351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.239359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.243847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.517 [2024-11-20 12:43:40.243868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.243875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.248306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.517 [2024-11-20 12:43:40.248328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.248336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.252792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.517 [2024-11-20 12:43:40.252813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.252821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.257354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 00:29:34.517 [2024-11-20 12:43:40.257375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.517 [2024-11-20 12:43:40.257384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.517 [2024-11-20 12:43:40.261762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0) 
00:29:34.518 [2024-11-20 12:43:40.261784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.518 [2024-11-20 12:43:40.261792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.518 [2024-11-20 12:43:40.266188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.518 [2024-11-20 12:43:40.266209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.518 [2024-11-20 12:43:40.266217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.518 [2024-11-20 12:43:40.270571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.518 [2024-11-20 12:43:40.270593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.518 [2024-11-20 12:43:40.270601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.518 [2024-11-20 12:43:40.274940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.518 [2024-11-20 12:43:40.274962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.518 [2024-11-20 12:43:40.274969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.279361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.279380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.279387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.283799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.283820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.283827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.288276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.288296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.288304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.292656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.292677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.292684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.297110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.297130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.297137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.301519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.301540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.301547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.305932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.305952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.305960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.310312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.310332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.310340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.314704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.314725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.314736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.319113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.319134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.319142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.323523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.323544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.323552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.327930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.327950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.327957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.332399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.332424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.332432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.336917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.336938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.336945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.340930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.340951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.340958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.343706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.343725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.343732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.347269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.347289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.347297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.351518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.351542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.351550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.355757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.355778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.355786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.360074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.360094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.778 [2024-11-20 12:43:40.360102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.778 [2024-11-20 12:43:40.364344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.778 [2024-11-20 12:43:40.364365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.364373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.368811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.368832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.368839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.373231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.373253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.373260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.378448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.378470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.378478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.385047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.385069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.385077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.391132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.391154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.391162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.396449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.396471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.396478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.401785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.401806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.401814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.406945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.406967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.406974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.412278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.412299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.412307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.416756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.416778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.416786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.421524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.421545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.421553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.426331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.426352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.426360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.431534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.431555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.431563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.438448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.438469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.438480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.444375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.444396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.444405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.450484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.450506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.450514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.457121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.457143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.457151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.462316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.462338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.462346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.467182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.467203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.467211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.472463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.472484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.472492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.779 [2024-11-20 12:43:40.478878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe03de0)
00:29:34.779 [2024-11-20 12:43:40.478899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.779 [2024-11-20 12:43:40.478907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.779 6691.00 IOPS, 836.38 MiB/s
00:29:34.779 Latency(us)
00:29:34.779 [2024-11-20T11:43:40.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.779 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:34.779 nvme0n1 : 2.00 6692.60 836.58 0.00 0.00 2387.87 532.48 7536.64
00:29:34.779 [2024-11-20T11:43:40.543Z] ===================================================================================================================
00:29:34.779 [2024-11-20T11:43:40.543Z] Total : 6692.60 836.58 0.00 0.00 2387.87 532.48 7536.64
00:29:34.779 {
00:29:34.779   "results": [
00:29:34.779     {
00:29:34.779       "job": "nvme0n1",
00:29:34.779       "core_mask": "0x2",
00:29:34.779       "workload": "randread",
00:29:34.779       "status": "finished",
00:29:34.779       "queue_depth": 16,
00:29:34.779       "io_size": 131072,
00:29:34.779       "runtime": 2.003855,
00:29:34.779       "iops": 6692.600013474029,
00:29:34.779       "mibps": 836.5750016842536,
00:29:34.779       "io_failed": 0,
00:29:34.779       "io_timeout": 0,
00:29:34.779       "avg_latency_us": 2387.874763321832,
00:29:34.779       "min_latency_us": 532.48,
00:29:34.779       "max_latency_us": 7536.64
00:29:34.779     }
00:29:34.779   ],
00:29:34.779   "core_count": 1
00:29:34.779 }
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:34.779 | .driver_specific
00:29:34.779 | .nvme_error
00:29:34.779 | .status_code
00:29:34.779 | .command_transient_transport_error'
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:35.039
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 433 > 0 ))
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1093410
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1093410 ']'
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error
-- common/autotest_common.sh@958 -- # kill -0 1093410
00:29:35.039 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:35.039 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:35.039 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093410
00:29:35.039 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:35.039 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:35.039 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093410'
killing process with pid 1093410
12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1093410
Received shutdown signal, test time was about 2.000000 seconds
00:29:35.039
00:29:35.039 Latency(us)
00:29:35.039 [2024-11-20T11:43:40.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.039 [2024-11-20T11:43:40.803Z] ===================================================================================================================
00:29:35.039 [2024-11-20T11:43:40.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:35.039 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1093410
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1093986
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1093986 /var/tmp/bperf.sock
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1093986 ']'
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:35.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:35.299 12:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:35.299 [2024-11-20 12:43:40.951406] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
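The `get_transient_errcount` helper traced above pipes `bdev_get_iostat` output through `jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'` and saw 433 transient transport errors, so the `(( 433 > 0 ))` check passed. A minimal Python sketch of the same extraction; the `sample` dict below is a trimmed, hypothetical reply shape containing only the fields the filter touches, not a captured RPC response:

```python
def get_transient_errcount(iostat: dict) -> int:
    # Equivalent of the jq filter:
    #   .bdevs[0] | .driver_specific | .nvme_error | .status_code
    #     | .command_transient_transport_error
    return (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

# Hypothetical, trimmed-down bdev_get_iostat reply; 433 matches the
# count observed in the run above.
sample = {
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 433}
            }
        },
    }]
}

print(get_transient_errcount(sample))  # 433
```

In the real script the dict would come from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1` (enabled earlier via `bdev_nvme_set_options --nvme-error-stat`).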
00:29:35.299 [2024-11-20 12:43:40.951461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093986 ]
00:29:35.299 [2024-11-20 12:43:41.024316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:35.299 [2024-11-20 12:43:41.058095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:35.558 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:35.558 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:35.558 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:35.558 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:35.817 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:35.817 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.818 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:35.818 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.818 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:35.818 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.077 nvme0n1
00:29:36.077 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:36.077 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.077 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.077 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.077 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:36.077 12:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:36.077 Running I/O for 2 seconds...
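At this point the run has attached the controller with `--ddgst` (data digest enabled) and flipped the accel crc32c handler into corrupt mode via `accel_error_inject_error -o crc32c -t corrupt -i 256`, so computed payload digests stop matching and every write completes with the data digest error / TRANSIENT TRANSPORT ERROR pairs that follow. As a sketch of what the digest check involves, here is a standalone bitwise CRC32C (Castagnoli, the polynomial NVMe/TCP digests use), not SPDK's accelerated implementation:

```python
def crc32c(data: bytes) -> int:
    # Reflected CRC32C: init 0xFFFFFFFF, reversed polynomial 0x82F63B78,
    # final XOR 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Well-known CRC32C check value for the string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# The receiver recomputes the digest over the payload it got; any
# corruption (here, a single flipped bit) makes the digests disagree,
# which is exactly what surfaces as "data digest error" in the log.
payload = bytes(512)
sent_digest = crc32c(payload)
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
print(crc32c(corrupted) != sent_digest)  # True: mismatch detected
```

The injected corruption makes this comparison fail on every PDU, which the host then reports as a transient transport error rather than a data error, since a retry over the wire could succeed.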
00:29:36.077 [2024-11-20 12:43:41.828427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.077 [2024-11-20 12:43:41.828553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.077 [2024-11-20 12:43:41.828586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.077 [2024-11-20 12:43:41.837134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.077 [2024-11-20 12:43:41.837247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.077 [2024-11-20 12:43:41.837268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.845812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.845923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.845943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.854486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.854594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.854612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.863136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.863244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.863262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.871766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.871876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.871894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.880424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.880530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.880549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.889187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.889298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.889316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.897845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.897952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.897970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.906463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.906578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.906597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.915108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.915216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.915235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.923733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.923840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:36.337 [2024-11-20 12:43:41.923858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.932346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.932461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.932478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.940958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.941064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.941081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.949577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.949686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.949704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.958182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.958289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:8021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.958308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.966796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.966903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.966924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.975399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.975515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.975533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.984039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.984147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.984166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:41.992657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:41.992763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:41.992780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:42.001254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:42.001361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.337 [2024-11-20 12:43:42.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.337 [2024-11-20 12:43:42.009858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.337 [2024-11-20 12:43:42.009966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.009984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.018465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.018574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.018592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.027069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 
00:29:36.338 [2024-11-20 12:43:42.027178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.027197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.035675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.035783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.035801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.044281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.044390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.044408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.052883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.052990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.053012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.061486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.061596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.061614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.070092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.070199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.070217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.078694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.078802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.078820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.087300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.087409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.087431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.338 [2024-11-20 12:43:42.095913] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.338 [2024-11-20 12:43:42.096022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.338 [2024-11-20 12:43:42.096040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.104525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.104634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.104652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.113134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.113242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.113260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.121749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.121856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.121874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 
m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.130339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.130453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.130471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.138948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.139056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.139074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.147553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.147664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.147681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.156153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.156260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.156278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.164762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.164870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.164889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.173366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.173492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.598 [2024-11-20 12:43:42.173510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.598 [2024-11-20 12:43:42.181991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.598 [2024-11-20 12:43:42.182098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.599 [2024-11-20 12:43:42.182116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.599 [2024-11-20 12:43:42.190612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.599 [2024-11-20 12:43:42.190721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.599 [2024-11-20 12:43:42.190739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:36.599 [2024-11-20 12:43:42.199219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:36.599 [2024-11-20 12:43:42.199328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.599 [2024-11-20 12:43:42.199349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
[Repetitive log output condensed: the same triple — tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78", a WRITE command (sqid:1, nsid:1, cycling cid:102-108 and later cid:78-84, varying lba, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) — repeats roughly every 8-9 ms from [2024-11-20 12:43:42.199] through [2024-11-20 12:43:42.897].]
00:29:37.121 29531.00 IOPS, 115.36 MiB/s [2024-11-20T11:43:42.885Z]
[2024-11-20 12:43:42.897542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.897559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.906032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.906141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.906158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.914613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.914720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.914739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.923231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.923341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.923359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.931827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.931936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.931955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.940425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.940533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.940551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.948999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.949109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.949128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.957610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.957718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.957735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.966196] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.966305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.966323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.974797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.974904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.974921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.983390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.983504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.983522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:42.991992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:42.992098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:42.992116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 00:29:37.381 [2024-11-20 12:43:43.000587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.000695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.000713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:43.009172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.009279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.009297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:43.017766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.017874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.017891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:43.026361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.026476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.026494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:43.034965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.035075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.035093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:43.043568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.043678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.043696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:43.052155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.052263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.052281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.381 [2024-11-20 12:43:43.060746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.381 [2024-11-20 12:43:43.060855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.381 [2024-11-20 12:43:43.060874] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.069336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.069453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.069472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.077932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.078038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.078055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.086529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.086636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.086654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.095125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.095233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.095254] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.103779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.103888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.103905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.112371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.112486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.112504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.120974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.121081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.121100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.129571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.129680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:37.382 [2024-11-20 12:43:43.129697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.382 [2024-11-20 12:43:43.138161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.382 [2024-11-20 12:43:43.138268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.382 [2024-11-20 12:43:43.138285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.146749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.146857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.146875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.155332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.155447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.155465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.163930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.164039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11456 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.164057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.172539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.172652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.172670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.181138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.181245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.181263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.189731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.189840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.189858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.198330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.198447] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.198464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.206933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.207042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.207061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.215542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.215649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.215665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.224147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.224253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.224271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.232736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.232842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.641 [2024-11-20 12:43:43.232859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.641 [2024-11-20 12:43:43.241333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.641 [2024-11-20 12:43:43.241450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.241467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.249926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.250032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.250049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.258519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.258627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.258645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.267096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with 
pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.267202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.267220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.275710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.275818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.275836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.284274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.284382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.284401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.292891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.292998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.293016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.301466] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.301574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.301593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.310088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.310195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.310213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.318938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.319047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.319068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.327540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.327648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.327667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.336136] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.336243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.336262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.344749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.344856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.344874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.353346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.353463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.353482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.361960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.362065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.362085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:29:37.642 [2024-11-20 12:43:43.370563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.370672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.370690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.379155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.379262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.379281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.387748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.387854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.387872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.642 [2024-11-20 12:43:43.396365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.642 [2024-11-20 12:43:43.396487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.642 [2024-11-20 12:43:43.396506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.404989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.405094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.405111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.413603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.413710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.413730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.422201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.422306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.422323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.430799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.430905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.430923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.439403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.439517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.439543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.448025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.448133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.448151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.456613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.456720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.456738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.465230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.465337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.465355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.473828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.473936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.473953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.482429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.482537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.482554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.491033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.491142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.491160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.499649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.499756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8880 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:37.902 [2024-11-20 12:43:43.499774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.508244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.508353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.508370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.516844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.516950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.516967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.525433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.525541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.525558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.534031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.534139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 
nsid:1 lba:15250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.534156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.542630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.542738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.542758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.551237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.551344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.551362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.559827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.559935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.559953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.568428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.902 [2024-11-20 12:43:43.568536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.902 [2024-11-20 12:43:43.568554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.902 [2024-11-20 12:43:43.577019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.577125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.577143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.585614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.585720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.585738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.594220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.594327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.594343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.602815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 
[2024-11-20 12:43:43.602921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.602938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.611398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.611511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.611528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.620004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.620116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.620134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.628776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.628885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.628903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.637398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.637511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.637529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.645997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.646104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.646121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.903 [2024-11-20 12:43:43.654570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:37.903 [2024-11-20 12:43:43.654678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.903 [2024-11-20 12:43:43.654698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.663174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.663281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.663299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.671762] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.671868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.671886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.680372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.680484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.680502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.688957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.689063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.689081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.697569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.697678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.697696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 00:29:38.162 [2024-11-20 12:43:43.706150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.706258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.706277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.714761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.714867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.714885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.723336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.723450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.723469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.731952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.162 [2024-11-20 12:43:43.732060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.162 [2024-11-20 12:43:43.732077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.162 [2024-11-20 12:43:43.740548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.740656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.740675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.749131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.749238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.749256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.757723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.757829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.757847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.766309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.766419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.766442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.774915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.775021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.775039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.783508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.783617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.783635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.792103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.792209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.792226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.800703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.800810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.800828] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.809294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.809400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.809422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 [2024-11-20 12:43:43.817887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8a80) with pdu=0x2000166fda78 00:29:38.163 [2024-11-20 12:43:43.817994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.163 [2024-11-20 12:43:43.818012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.163 29623.00 IOPS, 115.71 MiB/s 00:29:38.163 Latency(us) 00:29:38.163 [2024-11-20T11:43:43.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.163 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.163 nvme0n1 : 2.00 29623.06 115.72 0.00 0.00 4314.08 3217.22 9115.46 00:29:38.163 [2024-11-20T11:43:43.927Z] =================================================================================================================== 00:29:38.163 [2024-11-20T11:43:43.927Z] Total : 29623.06 115.72 0.00 0.00 4314.08 3217.22 9115.46 00:29:38.163 { 00:29:38.163 "results": [ 00:29:38.163 { 00:29:38.163 "job": "nvme0n1", 00:29:38.163 "core_mask": "0x2", 00:29:38.163 "workload": "randwrite", 00:29:38.163 "status": "finished", 00:29:38.163 "queue_depth": 128, 
00:29:38.163 "io_size": 4096, 00:29:38.163 "runtime": 2.004047, 00:29:38.163 "iops": 29623.057742657733, 00:29:38.163 "mibps": 115.71506930725677, 00:29:38.163 "io_failed": 0, 00:29:38.163 "io_timeout": 0, 00:29:38.163 "avg_latency_us": 4314.080798253056, 00:29:38.163 "min_latency_us": 3217.221818181818, 00:29:38.163 "max_latency_us": 9115.461818181819 00:29:38.163 } 00:29:38.163 ], 00:29:38.163 "core_count": 1 00:29:38.163 } 00:29:38.163 12:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:38.163 12:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:38.163 | .driver_specific 00:29:38.163 | .nvme_error 00:29:38.163 | .status_code 00:29:38.163 | .command_transient_transport_error' 00:29:38.163 12:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:38.163 12:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 )) 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1093986 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1093986 ']' 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1093986 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1093986 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093986' 00:29:38.422 killing process with pid 1093986 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1093986 00:29:38.422 Received shutdown signal, test time was about 2.000000 seconds 00:29:38.422 00:29:38.422 Latency(us) 00:29:38.422 [2024-11-20T11:43:44.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.422 [2024-11-20T11:43:44.186Z] =================================================================================================================== 00:29:38.422 [2024-11-20T11:43:44.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:38.422 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1093986 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1094684 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1094684 
/var/tmp/bperf.sock 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1094684 ']' 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.681 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.681 [2024-11-20 12:43:44.281044] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:38.681 [2024-11-20 12:43:44.281089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094684 ] 00:29:38.681 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:38.681 Zero copy mechanism will not be used. 
00:29:38.681 [2024-11-20 12:43:44.353113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.681 [2024-11-20 12:43:44.392402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.940 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.200 nvme0n1 00:29:39.200 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:39.200 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.200 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.200 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.200 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:39.200 12:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.460 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:39.460 Zero copy mechanism will not be used. 00:29:39.460 Running I/O for 2 seconds... 00:29:39.460 [2024-11-20 12:43:45.005820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.005889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.005915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.010242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.010305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.010327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.460 
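The repeated `data_crc32_calc_done: *ERROR*: Data digest error` lines come from the receiver recomputing the NVMe/TCP data digest (a CRC32C over the PDU payload, enabled by `--ddgst` on `bdev_nvme_attach_controller`) after `accel_error_inject_error -o crc32c -t corrupt` forces mismatches; each failed write then completes with status `(00/22)`, i.e. status code type 0, status code 0x22 (transient transport error). As a rough sketch of what is being checked (an illustration, not SPDK's actual implementation):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
    init and final XOR of 0xFFFFFFFF -- the digest NVMe/TCP uses."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


def decode_status(raw: int) -> dict:
    """Unpack the 16-bit status halfword of an NVMe completion (CQE DW3),
    as printed by spdk_nvme_print_completion (sct/sc, m, dnr, phase)."""
    return {
        "phase": raw & 1,          # phase tag (p:0 in the log)
        "sc": (raw >> 1) & 0xFF,   # status code (0x22 = transient transport error)
        "sct": (raw >> 9) & 0x7,   # status code type (0 = generic)
        "m": (raw >> 14) & 1,      # more (m:0)
        "dnr": (raw >> 15) & 1,    # do not retry (dnr:0)
    }


# Standard CRC32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Corrupting a single payload byte changes the digest, so the receiver's
# recomputed CRC32C no longer matches the DDGST field -- the same mismatch
# the accel error injection produces for every write above.
assert crc32c(b"some pdu payload") != crc32c(b"some pdu payloae")
```

The completion fields in the log (`sqhd`, `cid`, `p`, `m`, `dnr`) cycle while `(00/22)` stays constant, which is exactly what a per-I/O injected digest failure should look like: every write is rejected at the transport with a retryable status rather than a media error.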
[2024-11-20 12:43:45.014100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.014156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.014175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.017881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.017947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.017964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.021613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.021680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.021698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.025252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.025316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.025334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.028916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.028968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.028987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.032469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.032537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.032555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.036046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.036109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.036126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.039691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.039755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.039772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.043248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.460 [2024-11-20 12:43:45.043308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.460 [2024-11-20 12:43:45.043326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.460 [2024-11-20 12:43:45.046880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.046955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.046972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.050382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.050462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.050480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.053970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.054035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.054052] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.057537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.057600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.057617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.061000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.061063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.061081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.064535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.064621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.064639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.068039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.068113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.068130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.071583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.071636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.071657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.075554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.075616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.075633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.079717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.079769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.079786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.084475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.084528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.084548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.088796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.088872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.088890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.093207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.093285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.093304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.097731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.097806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.097824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.102664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.102743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.102762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.107724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.107779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.107796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.111939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.112269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.112289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.115820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.116075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.116094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.119648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 
00:29:39.461 [2024-11-20 12:43:45.119925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.119944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.123733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.123987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.124007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.128946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.129202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.129221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.134723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.134960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.134979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.140447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.140652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.140670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.146162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.146368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.146388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.151898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.152133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.152152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.157836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.158102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.158122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.461 [2024-11-20 12:43:45.162924] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.461 [2024-11-20 12:43:45.163205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.461 [2024-11-20 12:43:45.163225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.168070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.168250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.168267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.173197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.173482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.173502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.178455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.178815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.178834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:39.462 [2024-11-20 12:43:45.183535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.183777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.183796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.187914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.188241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.188260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.192937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.193147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.193166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.198385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.198641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.198664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.203762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.203926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.203944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.209519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.209759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.209779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.462 [2024-11-20 12:43:45.214949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.462 [2024-11-20 12:43:45.215092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.462 [2024-11-20 12:43:45.215110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.722 [2024-11-20 12:43:45.220342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.722 [2024-11-20 12:43:45.220562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.722 [2024-11-20 12:43:45.220581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.722 [2024-11-20 12:43:45.225597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.722 [2024-11-20 12:43:45.225801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.722 [2024-11-20 12:43:45.225820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.722 [2024-11-20 12:43:45.231172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.722 [2024-11-20 12:43:45.231298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.722 [2024-11-20 12:43:45.231316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.722 [2024-11-20 12:43:45.236352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.722 [2024-11-20 12:43:45.236568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.722 [2024-11-20 12:43:45.236588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.722 [2024-11-20 12:43:45.241739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.722 [2024-11-20 12:43:45.241889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:39.722 [2024-11-20 12:43:45.241907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.722 [2024-11-20 12:43:45.247366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.722 [2024-11-20 12:43:45.247624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.722 [2024-11-20 12:43:45.247643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.722 [2024-11-20 12:43:45.252477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.722 [2024-11-20 12:43:45.252682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.722 [2024-11-20 12:43:45.252699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.723 [2024-11-20 12:43:45.257694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.723 [2024-11-20 12:43:45.257944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.723 [2024-11-20 12:43:45.257963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.723 [2024-11-20 12:43:45.263045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.723 [2024-11-20 12:43:45.263292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.263311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.268131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.268366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.268386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.273762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.273987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.274005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.279040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.279310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.279329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.284157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.284383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.284403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.289389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.289603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.289620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.294453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.294682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.294701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.299318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.299523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.299540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.303851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.304101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.304120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.309075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.309304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.309324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.314084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.314254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.314273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.319792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.320075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.320094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.325038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.325293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.325311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.330119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.330360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.330379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.335335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.335571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.335594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.340205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.340407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.340430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.344466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.344573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.344590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.349644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.349732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.349755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.355060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.355168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.355186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.360649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.360864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.360884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.365992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.366078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.366097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.371470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.371682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.371701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.377337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.377448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.377466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.382232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.382298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.382315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.385876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.385944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.385962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.723 [2024-11-20 12:43:45.388760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.723 [2024-11-20 12:43:45.388815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.723 [2024-11-20 12:43:45.388833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.391633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.391685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.391703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.394479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.394534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.394551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.397164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.397217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.397234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.399884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.399961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.399978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.402649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.402734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.402752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.405381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.405451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.405470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.408098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.408177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.408196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.410840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.410903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.410920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.413564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.413638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.413655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.416298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.416368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.416385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.419043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.419169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.419187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.422389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.422477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.422495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.427291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.427378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.427396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.432273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.432460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.432477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.437727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.437812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.437834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.442812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.442885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.442903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.447988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.448091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.448108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.453218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.453317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.453335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.458406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.458522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.458540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.463680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.463846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.463864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.468977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.469063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.469081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.474092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.474265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.474282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.724 [2024-11-20 12:43:45.479370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.724 [2024-11-20 12:43:45.479473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.724 [2024-11-20 12:43:45.479491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.484585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.484684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.484702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.489840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.490000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.490018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.495026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.495217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.495234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.500337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.500426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.500444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.505510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.505633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.505650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.510885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.510980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.510997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.515574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.515713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.515731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.520427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.520557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.520576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.525946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.526042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.526059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.531426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.531523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.531541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.537407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.537526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.537544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.542830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.542892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.542909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.547085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.547189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.547207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.550544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.550626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.550644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.553985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.554060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.554078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.558677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.558855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.558873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.563997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.564195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.564215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.569466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.569635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.569657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.575029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.575122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.575140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.580112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.580190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.580208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.585314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.585401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.585425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.590541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.590743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.985 [2024-11-20 12:43:45.590761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.985 [2024-11-20 12:43:45.595731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.985 [2024-11-20 12:43:45.595809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.595826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.600857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.601012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.601029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.606309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.606482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.606500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.611947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.612107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.612124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.617617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.617820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.617840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.622887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.623031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.623049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.628278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.628426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.628444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.633870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.633955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.633972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.638965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.639036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.639054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.644039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.644119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.644138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.649246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:39.986 [2024-11-20 12:43:45.649316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.986 [2024-11-20 12:43:45.649333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.986 [2024-11-20 12:43:45.654447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.654529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.654547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.659779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.659870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.659888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.665115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.665217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.665235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.670337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.670420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.670438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.675665] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.675779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.675797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.681698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.681783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.681801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.687578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.687727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.687745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.693481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.693598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.693615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:39.986 [2024-11-20 12:43:45.699173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.699262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.699280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.704563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.704757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.704774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.710140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.710227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.710249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.715480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.715549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.715567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.720818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.720892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.720910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.725918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.726099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.726117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.986 [2024-11-20 12:43:45.731021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.986 [2024-11-20 12:43:45.731175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.986 [2024-11-20 12:43:45.731192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.987 [2024-11-20 12:43:45.736064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.987 [2024-11-20 12:43:45.736261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.987 [2024-11-20 12:43:45.736281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.987 [2024-11-20 12:43:45.741285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:39.987 [2024-11-20 12:43:45.741471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.987 [2024-11-20 12:43:45.741489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.746488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.746689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.746706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.751295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.751435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.751453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.756051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.756145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:40.247 [2024-11-20 12:43:45.756163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.761192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.761361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.761379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.767021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.767245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.767265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.772509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.772715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.772732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.777949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.778027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.778046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.782709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.782793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.782811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.786394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.786468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.786485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.789925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.789996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.790014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.793445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.793513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.793530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.796901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.796973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.796991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.801623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.801765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.801783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.805537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.805605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.805623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.808939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 
00:29:40.247 [2024-11-20 12:43:45.809026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.809043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.811896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.811979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.811996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.814742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.814812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.814829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.817602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.817673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.817690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.820826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.820932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.820950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.824982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.825188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.247 [2024-11-20 12:43:45.825212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.247 [2024-11-20 12:43:45.830058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.247 [2024-11-20 12:43:45.830244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.830262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.835662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.835729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.835747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.841154] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.841228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.841245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.846575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.846654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.846671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.852181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.852267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.852286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.857931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.858012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.858030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:40.248 [2024-11-20 12:43:45.863421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.863490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.863508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.868972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.869074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.869092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.874585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.874663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.874681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.880322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.880420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.880438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.885669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.885755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.885773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.891129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.891264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.891282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.896243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.896459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.896478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.901698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.901917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.901936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.907013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.907112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.907130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.912644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.912843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.912862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.918087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.918308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.918327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.923879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.924002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:40.248 [2024-11-20 12:43:45.924020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.929556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.929726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.929743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.935250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.935326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.935344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.940847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.941036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.941054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.946491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.946574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.946592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.951953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.952029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.952047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.957802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.957904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.957922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.963542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.963721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.963739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.969365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.969461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.969482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.974997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.248 [2024-11-20 12:43:45.975071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.248 [2024-11-20 12:43:45.975089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.248 [2024-11-20 12:43:45.980671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.249 [2024-11-20 12:43:45.980823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.249 [2024-11-20 12:43:45.980841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.249 [2024-11-20 12:43:45.986225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.249 [2024-11-20 12:43:45.986304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.249 [2024-11-20 12:43:45.986322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.249 [2024-11-20 12:43:45.991919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 
00:29:40.249 [2024-11-20 12:43:45.992014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.249 [2024-11-20 12:43:45.992032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.249 [2024-11-20 12:43:45.997393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.249 [2024-11-20 12:43:45.997484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.249 [2024-11-20 12:43:45.997502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.249 [2024-11-20 12:43:46.003142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.249 [2024-11-20 12:43:46.003327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.249 [2024-11-20 12:43:46.003345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.508 6400.00 IOPS, 800.00 MiB/s [2024-11-20T11:43:46.272Z] [2024-11-20 12:43:46.009304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.508 [2024-11-20 12:43:46.009501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.508 [2024-11-20 12:43:46.009519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.508 [2024-11-20 12:43:46.014507] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.508 [2024-11-20 12:43:46.014575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.508 [2024-11-20 12:43:46.014592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.508 [2024-11-20 12:43:46.019699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.508 [2024-11-20 12:43:46.019787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.508 [2024-11-20 12:43:46.019805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.508 [2024-11-20 12:43:46.025007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.508 [2024-11-20 12:43:46.025120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.508 [2024-11-20 12:43:46.025138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.508 [2024-11-20 12:43:46.030143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.508 [2024-11-20 12:43:46.030313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.508 [2024-11-20 12:43:46.030330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:40.508 [2024-11-20 12:43:46.034478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.508 [2024-11-20 12:43:46.034551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.034570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.039155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.039320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.039337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.044005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.044118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.044136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.047579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.047642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.047659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.050448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.050508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.050525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.053216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.053277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.053294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.055911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.055965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.055982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.058638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.058701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.058719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.061328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.061385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.061402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.064048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.064107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.064125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.066744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.066821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.066838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.069453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.069530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:40.509 [2024-11-20 12:43:46.069547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.072158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.072219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.072237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.075002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.075123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.075140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.078418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.078602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.078622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.083493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.083569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.083587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.087681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.087813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.087831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.091425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.091531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.091549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.094883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.094962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.094980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.098197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.098319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.098337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.101189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.101254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.101271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.103874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.103949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.103966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.106740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.106834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.106852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.109655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.109721] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.109738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.112327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.112408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.112430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.115024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.115075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.115093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.117691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.117800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.509 [2024-11-20 12:43:46.117817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.509 [2024-11-20 12:43:46.120378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with 
pdu=0x2000166ff3c8 00:29:40.509 [2024-11-20 12:43:46.120470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.120487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.123248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.123316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.123333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.126436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.126503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.126520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.130033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.130103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.130120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.132980] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.133039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.133056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.135909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.135985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.136003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.138835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.138922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.138940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.141524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.141581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.141598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 
12:43:46.144185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.144257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.144276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.146882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.146951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.146968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.149541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.149639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.149656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.152344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.152440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.152457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.155455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.155517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.155534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.159436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.159533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.159553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.163824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.164013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.164031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.169288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.169365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.169383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.174633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.174885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.174904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.180210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.180295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.180313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.185801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.185877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.185895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.191265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.191461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.191479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.196936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.197011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.197029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.201646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.201740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.201758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.206087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.206147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.206164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.209962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.210051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:40.510 [2024-11-20 12:43:46.210069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.212787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.212834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.212852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.215616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.215671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.215689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.218367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.218429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.218447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.221100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.221154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.221171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.223867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.223918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.223936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.226633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.226703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.226720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.229331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.229388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.229406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.232006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.232084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.232102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.234723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.510 [2024-11-20 12:43:46.234775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.510 [2024-11-20 12:43:46.234792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.510 [2024-11-20 12:43:46.237428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.237481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.237499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.240146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.240200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.240218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.242852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 
00:29:40.511 [2024-11-20 12:43:46.242902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.242919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.245546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.245663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.245681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.248909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.248997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.249014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.253530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.253709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.253727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.258304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.258368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.258389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.262561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.262676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.262694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.511 [2024-11-20 12:43:46.267137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.511 [2024-11-20 12:43:46.267186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.511 [2024-11-20 12:43:46.267204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.270820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.270872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.270890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.274183] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.274236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.274254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.277536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.277648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.277665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.281440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.281490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.281508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.284751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.284807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.284827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:40.775 [2024-11-20 12:43:46.287996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.288084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.288103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.291211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.291286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.291304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.294345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.294396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.294421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.297603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.297651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.297669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.300664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.300714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.300732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.303771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.303830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.303847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.306649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.306703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.306720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.309366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.309423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.309441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.312056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.312118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.312135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.314765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.314810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.314827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.317449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.317505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.317522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.320789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.320890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:40.775 [2024-11-20 12:43:46.320908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.324002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.324103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.324121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.327941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.327989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.328008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.330954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.331013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.331031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.334139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.334186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.334204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.337219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.337285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.775 [2024-11-20 12:43:46.337303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.775 [2024-11-20 12:43:46.340521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.775 [2024-11-20 12:43:46.340568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.340585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.343690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.343742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.343763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.347016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.347081] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.347099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.350118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.350216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.350235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.353313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.353375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.353392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.356450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.356526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.356543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.359641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.359694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.359712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.362794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.362865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.362883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.366048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.366099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.366117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.369056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.369105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.369122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.372146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with 
pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.372219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.372237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.374830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.374880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.374898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.377554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.377607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.377625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.380227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.380280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.380297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.382906] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.382952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.382969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.385561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.385621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.385639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.388230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.388286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.388303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 12:43:46.391396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:40.776 [2024-11-20 12:43:46.391474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.776 [2024-11-20 12:43:46.391492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:40.776 [2024-11-20 
12:43:46.395408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.395596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.395614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.400471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.400558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.400575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.405451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.405619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.405636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.410525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.410713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.410731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.415606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.415799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.415816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.420655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.420815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.420833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.425736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.425827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.425844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.430704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.430895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.430912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.776 [2024-11-20 12:43:46.435815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.776 [2024-11-20 12:43:46.435990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.776 [2024-11-20 12:43:46.436008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.440867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.440939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.440960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.446195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.446262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.446279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.451300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.451476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.451494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.457168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.457400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.457425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.462545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.462741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.462760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.468227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.468358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.468375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.473579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.473825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.473844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.479208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.479402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.479424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.484212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.484394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.484423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.487634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.487765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.487783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.490392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.490534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.490551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.493154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.493275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.493292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.495939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.496074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.496091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.498660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.498782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.498799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.501391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.501519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.501537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.504086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.504210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.504228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.506794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.506922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.506940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.509466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.509603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.509620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.512188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.512333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.512351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.514877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.515008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.515026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.517577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.517700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.517717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.520240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.520372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.520391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.522936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.523066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.523084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.525607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.525737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.525754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.528261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.528400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.528423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.530930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.531060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.531077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:40.777 [2024-11-20 12:43:46.533630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:40.777 [2024-11-20 12:43:46.533767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.777 [2024-11-20 12:43:46.533788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.037 [2024-11-20 12:43:46.536687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.037 [2024-11-20 12:43:46.536827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.037 [2024-11-20 12:43:46.536845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.037 [2024-11-20 12:43:46.540818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.037 [2024-11-20 12:43:46.540948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.037 [2024-11-20 12:43:46.540966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.037 [2024-11-20 12:43:46.545816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.037 [2024-11-20 12:43:46.546018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.037 [2024-11-20 12:43:46.546037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.037 [2024-11-20 12:43:46.551007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.037 [2024-11-20 12:43:46.551235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.037 [2024-11-20 12:43:46.551254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.037 [2024-11-20 12:43:46.556245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.037 [2024-11-20 12:43:46.556446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.037 [2024-11-20 12:43:46.556464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.037 [2024-11-20 12:43:46.561438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.561609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.561627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.567177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.567338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.567356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.572754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.572934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.572951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.578517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.578633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.578651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.584580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.584654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.584672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.590525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.590633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.590651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.595783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.595954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.595971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.601644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.601717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.601734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.607106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.607190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.607207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.612993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.613178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.613195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.618629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.618840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.618860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.624245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.624371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.624389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.629743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.629837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.629855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.634956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.635161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.635180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.640200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.640382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.640399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.645616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.645818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.645837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.651328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.651533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.651550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.656557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.656718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.656735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.662537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.662717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.662734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.667779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.667841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.667858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.673290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.673489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.673509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.678781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.678858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.678875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.684502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.684579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.684598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.689790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.689916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.689934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.695428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.695656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.695675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.700663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.700740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.700759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.706000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.038 [2024-11-20 12:43:46.706072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.038 [2024-11-20 12:43:46.706090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.038 [2024-11-20 12:43:46.711703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.711784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.711801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.717049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.717116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.717134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.722528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.722696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.722713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.728329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.728452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.728470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.734033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.734207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.734225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.739814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.739909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.739926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.745313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.745514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.745531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.750660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.750823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.750841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.756420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.756589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.756607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:41.039 [2024-11-20 12:43:46.761850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8
00:29:41.039 [2024-11-20 12:43:46.761941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.039 [2024-11-20 12:43:46.761958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0
dnr:0 00:29:41.039 [2024-11-20 12:43:46.767393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.039 [2024-11-20 12:43:46.767565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.039 [2024-11-20 12:43:46.767584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.039 [2024-11-20 12:43:46.773189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.039 [2024-11-20 12:43:46.773275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.039 [2024-11-20 12:43:46.773293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.039 [2024-11-20 12:43:46.778615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.039 [2024-11-20 12:43:46.778819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.039 [2024-11-20 12:43:46.778838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.039 [2024-11-20 12:43:46.784205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.039 [2024-11-20 12:43:46.784404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.039 [2024-11-20 12:43:46.784434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.039 [2024-11-20 12:43:46.789928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.039 [2024-11-20 12:43:46.790178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.039 [2024-11-20 12:43:46.790197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.039 [2024-11-20 12:43:46.795535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.039 [2024-11-20 12:43:46.795746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.039 [2024-11-20 12:43:46.795765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.801114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.801348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.298 [2024-11-20 12:43:46.801367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.806974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.807142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.298 [2024-11-20 12:43:46.807159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.812784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.812859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.298 [2024-11-20 12:43:46.812878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.818654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.818765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.298 [2024-11-20 12:43:46.818786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.824279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.824467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.298 [2024-11-20 12:43:46.824485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.830109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.830191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:41.298 [2024-11-20 12:43:46.830209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.835745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.835926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.298 [2024-11-20 12:43:46.835944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.298 [2024-11-20 12:43:46.841461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.298 [2024-11-20 12:43:46.841645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.841662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.847164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.847273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.847291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.852926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.853022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.853040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.858828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.858899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.858916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.864545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.864629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.864647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.870465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.870646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.870664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.876198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.876388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.876407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.881938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.882009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.882026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.887639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.887729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.887747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.893030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.893207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.893225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.898852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 
00:29:41.299 [2024-11-20 12:43:46.898973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.898991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.904863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.904969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.904986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.910477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.910676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.910693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.916359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.916547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.916566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.922318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.922389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.922407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.928050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.928127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.928145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.933697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.933862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.933879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.939130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.939325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.939343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.944831] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.944922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.944940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.950350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.950450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.950468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.955980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.956175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.956193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.961618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.961797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.961814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:41.299 [2024-11-20 12:43:46.967678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.967805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.967826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.973351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.973500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.973519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.979232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.979348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.979366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.985092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.985201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.985218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.990226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.990400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.990424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:46.995579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:46.995743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:46.995760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:47.000515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:47.000592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:47.000610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:41.299 [2024-11-20 12:43:47.004684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:47.004854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:47.004871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:41.299 6846.50 IOPS, 855.81 MiB/s [2024-11-20T11:43:47.063Z] [2024-11-20 12:43:47.010592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a8f60) with pdu=0x2000166ff3c8 00:29:41.299 [2024-11-20 12:43:47.010700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.299 [2024-11-20 12:43:47.010719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:41.299 00:29:41.299 Latency(us) 00:29:41.299 [2024-11-20T11:43:47.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.299 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:41.299 nvme0n1 : 2.00 6842.19 855.27 0.00 0.00 2334.22 1273.48 6970.65 00:29:41.299 [2024-11-20T11:43:47.063Z] =================================================================================================================== 00:29:41.299 [2024-11-20T11:43:47.063Z] Total : 6842.19 855.27 0.00 0.00 2334.22 1273.48 6970.65 00:29:41.299 { 00:29:41.299 "results": [ 00:29:41.299 { 00:29:41.299 "job": "nvme0n1", 00:29:41.299 "core_mask": "0x2", 00:29:41.299 "workload": "randwrite", 00:29:41.299 "status": "finished", 00:29:41.299 "queue_depth": 16, 00:29:41.299 "io_size": 131072, 00:29:41.299 "runtime": 2.004183, 00:29:41.299 "iops": 6842.189560534142, 00:29:41.299 "mibps": 855.2736950667678, 00:29:41.299 "io_failed": 0, 00:29:41.299 "io_timeout": 0, 00:29:41.299 "avg_latency_us": 2334.2219945241077, 00:29:41.299 "min_latency_us": 1273.4836363636364, 00:29:41.299 "max_latency_us": 6970.647272727273 00:29:41.299 } 00:29:41.299 ], 00:29:41.299 "core_count": 1 00:29:41.299 } 00:29:41.299 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:41.299 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:41.299 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:41.299 | .driver_specific 00:29:41.299 | .nvme_error 00:29:41.299 | .status_code 00:29:41.299 | .command_transient_transport_error' 00:29:41.299 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 443 > 0 )) 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1094684 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1094684 ']' 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1094684 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094684 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094684' 00:29:41.558 killing process with pid 1094684 00:29:41.558 
12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1094684 00:29:41.558 Received shutdown signal, test time was about 2.000000 seconds 00:29:41.558 00:29:41.558 Latency(us) 00:29:41.558 [2024-11-20T11:43:47.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.558 [2024-11-20T11:43:47.322Z] =================================================================================================================== 00:29:41.558 [2024-11-20T11:43:47.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.558 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1094684 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1092843 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1092843 ']' 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1092843 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092843 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092843' 00:29:41.817 killing process with pid 1092843 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 1092843 00:29:41.817 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1092843 00:29:42.077 00:29:42.077 real 0m13.810s 00:29:42.077 user 0m26.848s 00:29:42.077 sys 0m3.877s 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.077 ************************************ 00:29:42.077 END TEST nvmf_digest_error 00:29:42.077 ************************************ 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.077 rmmod nvme_tcp 00:29:42.077 rmmod nvme_fabrics 00:29:42.077 rmmod nvme_keyring 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1092843 ']' 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1092843 00:29:42.077 
12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1092843 ']' 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1092843 00:29:42.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1092843) - No such process 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1092843 is not found' 00:29:42.077 Process with pid 1092843 is not found 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.077 12:43:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.615 00:29:44.615 real 0m36.090s 00:29:44.615 user 0m55.229s 00:29:44.615 sys 0m12.508s 00:29:44.615 
12:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:44.615 ************************************ 00:29:44.615 END TEST nvmf_digest 00:29:44.615 ************************************ 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.615 ************************************ 00:29:44.615 START TEST nvmf_bdevperf 00:29:44.615 ************************************ 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:44.615 * Looking for test storage... 
00:29:44.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:44.615 12:43:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:44.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.615 --rc genhtml_branch_coverage=1 00:29:44.615 --rc genhtml_function_coverage=1 00:29:44.615 --rc genhtml_legend=1 00:29:44.615 --rc geninfo_all_blocks=1 00:29:44.615 --rc geninfo_unexecuted_blocks=1 00:29:44.615 00:29:44.615 ' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:44.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.615 --rc genhtml_branch_coverage=1 00:29:44.615 --rc genhtml_function_coverage=1 00:29:44.615 --rc genhtml_legend=1 00:29:44.615 --rc geninfo_all_blocks=1 00:29:44.615 --rc geninfo_unexecuted_blocks=1 00:29:44.615 00:29:44.615 ' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:44.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.615 --rc genhtml_branch_coverage=1 00:29:44.615 --rc genhtml_function_coverage=1 00:29:44.615 --rc genhtml_legend=1 00:29:44.615 --rc geninfo_all_blocks=1 00:29:44.615 --rc geninfo_unexecuted_blocks=1 00:29:44.615 00:29:44.615 ' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:44.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.615 --rc genhtml_branch_coverage=1 00:29:44.615 --rc genhtml_function_coverage=1 00:29:44.615 --rc genhtml_legend=1 00:29:44.615 --rc geninfo_all_blocks=1 00:29:44.615 --rc geninfo_unexecuted_blocks=1 00:29:44.615 00:29:44.615 ' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.615 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.616 12:43:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.190 12:43:55 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.190 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:29:51.191 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.191 
12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:29:51.191 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:29:51.191 Found net devices under 0000:1a:00.0: cvl_0_0 00:29:51.191 12:43:55 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:29:51.191 Found net devices under 0000:1a:00.1: cvl_0_1 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.191 12:43:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:29:51.191 00:29:51.191 --- 10.0.0.2 ping statistics --- 00:29:51.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.191 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:29:51.191 00:29:51.191 --- 10.0.0.1 ping statistics --- 00:29:51.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.191 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1098845 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1098845 00:29:51.191 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1098845 ']' 00:29:51.192 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.192 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.192 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:51.192 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.192 12:43:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.192 [2024-11-20 12:43:56.265793] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:51.192 [2024-11-20 12:43:56.265834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.192 [2024-11-20 12:43:56.340302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:51.192 [2024-11-20 12:43:56.380686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.192 [2024-11-20 12:43:56.380720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.192 [2024-11-20 12:43:56.380727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.192 [2024-11-20 12:43:56.380732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.192 [2024-11-20 12:43:56.380737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:51.192 [2024-11-20 12:43:56.382131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.192 [2024-11-20 12:43:56.382244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.192 [2024-11-20 12:43:56.382245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.452 [2024-11-20 12:43:57.117946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.452 Malloc0 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.452 [2024-11-20 12:43:57.176740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:51.452 
12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.452 { 00:29:51.452 "params": { 00:29:51.452 "name": "Nvme$subsystem", 00:29:51.452 "trtype": "$TEST_TRANSPORT", 00:29:51.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.452 "adrfam": "ipv4", 00:29:51.452 "trsvcid": "$NVMF_PORT", 00:29:51.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.452 "hdgst": ${hdgst:-false}, 00:29:51.452 "ddgst": ${ddgst:-false} 00:29:51.452 }, 00:29:51.452 "method": "bdev_nvme_attach_controller" 00:29:51.452 } 00:29:51.452 EOF 00:29:51.452 )") 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:51.452 12:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:51.452 "params": { 00:29:51.452 "name": "Nvme1", 00:29:51.452 "trtype": "tcp", 00:29:51.452 "traddr": "10.0.0.2", 00:29:51.452 "adrfam": "ipv4", 00:29:51.452 "trsvcid": "4420", 00:29:51.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.452 "hdgst": false, 00:29:51.452 "ddgst": false 00:29:51.452 }, 00:29:51.452 "method": "bdev_nvme_attach_controller" 00:29:51.452 }' 00:29:51.711 [2024-11-20 12:43:57.227280] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
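The xtrace above shows `gen_nvmf_target_json` assembling the bdevperf JSON: each subsystem contributes one fragment via a heredoc into a `config` array, and the fragments are comma-joined before being handed to bdevperf. A minimal standalone sketch of that pattern (variable values mirror this run: tcp, 10.0.0.2, 4420; `hdgst`/`ddgst` default to false as in the trace — the trace additionally validates the result with `jq`, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json config-assembly pattern seen in the trace.
# One JSON fragment per subsystem, built with a heredoc so shell variables expand.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-1}"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, as the IFS=, / printf step does in the trace.
IFS=,
printf '%s\n' "${config[*]}"
```

The joined output is what bdevperf reads from the `--json /dev/fd/62` process-substitution file descriptor in the invocation above.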
00:29:51.711 [2024-11-20 12:43:57.227319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099082 ]
00:29:51.711 [2024-11-20 12:43:57.301729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:51.711 [2024-11-20 12:43:57.340030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:51.970 Running I/O for 1 seconds...
00:29:52.906 12578.00 IOPS, 49.13 MiB/s
00:29:52.906 Latency(us)
00:29:52.906 [2024-11-20T11:43:58.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.906 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:52.906 Verification LBA range: start 0x0 length 0x4000
00:29:52.906 Nvme1n1 : 1.01 12628.21 49.33 0.00 0.00 10095.72 1995.87 13583.83
00:29:52.906 [2024-11-20T11:43:58.670Z] ===================================================================================================================
00:29:52.906 [2024-11-20T11:43:58.670Z] Total : 12628.21 49.33 0.00 0.00 10095.72 1995.87 13583.83
00:29:53.165 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1099352
00:29:53.165 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:53.165 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:53.165 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:53.165 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:29:53.165 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:29:53.165 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for
subsystem in "${@:-1}" 00:29:53.166 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.166 { 00:29:53.166 "params": { 00:29:53.166 "name": "Nvme$subsystem", 00:29:53.166 "trtype": "$TEST_TRANSPORT", 00:29:53.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.166 "adrfam": "ipv4", 00:29:53.166 "trsvcid": "$NVMF_PORT", 00:29:53.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.166 "hdgst": ${hdgst:-false}, 00:29:53.166 "ddgst": ${ddgst:-false} 00:29:53.166 }, 00:29:53.166 "method": "bdev_nvme_attach_controller" 00:29:53.166 } 00:29:53.166 EOF 00:29:53.166 )") 00:29:53.166 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:53.166 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:53.166 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:53.166 12:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:53.166 "params": { 00:29:53.166 "name": "Nvme1", 00:29:53.166 "trtype": "tcp", 00:29:53.166 "traddr": "10.0.0.2", 00:29:53.166 "adrfam": "ipv4", 00:29:53.166 "trsvcid": "4420", 00:29:53.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.166 "hdgst": false, 00:29:53.166 "ddgst": false 00:29:53.166 }, 00:29:53.166 "method": "bdev_nvme_attach_controller" 00:29:53.166 }' 00:29:53.166 [2024-11-20 12:43:58.862409] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:29:53.166 [2024-11-20 12:43:58.862467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099352 ] 00:29:53.425 [2024-11-20 12:43:58.936434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.425 [2024-11-20 12:43:58.971738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.682 Running I/O for 15 seconds... 00:29:55.555 12607.00 IOPS, 49.25 MiB/s [2024-11-20T11:44:01.888Z] 12686.50 IOPS, 49.56 MiB/s [2024-11-20T11:44:01.888Z] 12:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1098845 00:29:56.124 12:44:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:56.124 [2024-11-20 12:44:01.839051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.124 [2024-11-20 12:44:01.839221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.124 [2024-11-20 12:44:01.839228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:56.124 [2024-11-20 12:44:01.839236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4040 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 
12:44:01.839512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839603] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.839983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.839991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 
12:44:01.839996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:56.125 [2024-11-20 12:44:01.840153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.125 [2024-11-20 12:44:01.840298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4496 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.125 [2024-11-20 12:44:01.840304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 
12:44:01.840382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840460] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.126 [2024-11-20 12:44:01.840756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 
[2024-11-20 12:44:01.840769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840843] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:56.126 [2024-11-20 12:44:01.840921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.126 [2024-11-20 12:44:01.840968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.840975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2097cc0 is same with the state(6) to be set 00:29:56.126 [2024-11-20 12:44:01.840983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:56.126 [2024-11-20 12:44:01.840988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:56.126 [2024-11-20 12:44:01.840994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:8 PRP1 0x0 PRP2 0x0 00:29:56.126 
[2024-11-20 12:44:01.841001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.126 [2024-11-20 12:44:01.843596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.126 [2024-11-20 12:44:01.843647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.126 [2024-11-20 12:44:01.844163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-11-20 12:44:01.844179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-11-20 12:44:01.844187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.126 [2024-11-20 12:44:01.844347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.126 [2024-11-20 12:44:01.844511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.126 [2024-11-20 12:44:01.844521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.126 [2024-11-20 12:44:01.844528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.126 [2024-11-20 12:44:01.844534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.126 [2024-11-20 12:44:01.856425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.126 [2024-11-20 12:44:01.856774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-11-20 12:44:01.856793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-11-20 12:44:01.856801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.126 [2024-11-20 12:44:01.856961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.126 [2024-11-20 12:44:01.857121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.126 [2024-11-20 12:44:01.857131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.126 [2024-11-20 12:44:01.857138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.126 [2024-11-20 12:44:01.857145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.126 [2024-11-20 12:44:01.869066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.126 [2024-11-20 12:44:01.869487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-11-20 12:44:01.869505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-11-20 12:44:01.869512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.126 [2024-11-20 12:44:01.869672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.126 [2024-11-20 12:44:01.869830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.126 [2024-11-20 12:44:01.869839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.126 [2024-11-20 12:44:01.869846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.126 [2024-11-20 12:44:01.869853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.126 [2024-11-20 12:44:01.881889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.126 [2024-11-20 12:44:01.882349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-11-20 12:44:01.882370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-11-20 12:44:01.882378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.126 [2024-11-20 12:44:01.882543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.126 [2024-11-20 12:44:01.882702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.126 [2024-11-20 12:44:01.882712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.126 [2024-11-20 12:44:01.882718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.126 [2024-11-20 12:44:01.882724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.387 [2024-11-20 12:44:01.894676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.387 [2024-11-20 12:44:01.895017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-11-20 12:44:01.895033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-11-20 12:44:01.895041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.387 [2024-11-20 12:44:01.895199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.387 [2024-11-20 12:44:01.895357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.387 [2024-11-20 12:44:01.895367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.387 [2024-11-20 12:44:01.895374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.387 [2024-11-20 12:44:01.895380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.387 [2024-11-20 12:44:01.907251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.387 [2024-11-20 12:44:01.907649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-11-20 12:44:01.907667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-11-20 12:44:01.907674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.387 [2024-11-20 12:44:01.907833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.387 [2024-11-20 12:44:01.907992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.387 [2024-11-20 12:44:01.908001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.387 [2024-11-20 12:44:01.908007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.387 [2024-11-20 12:44:01.908014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.387 [2024-11-20 12:44:01.919851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.387 [2024-11-20 12:44:01.920277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-11-20 12:44:01.920294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-11-20 12:44:01.920302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.387 [2024-11-20 12:44:01.920471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.387 [2024-11-20 12:44:01.920631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.387 [2024-11-20 12:44:01.920640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.387 [2024-11-20 12:44:01.920646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.387 [2024-11-20 12:44:01.920652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.387 [2024-11-20 12:44:01.932495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.387 [2024-11-20 12:44:01.932909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-11-20 12:44:01.932926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-11-20 12:44:01.932933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.387 [2024-11-20 12:44:01.933091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.387 [2024-11-20 12:44:01.933250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.387 [2024-11-20 12:44:01.933259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.387 [2024-11-20 12:44:01.933265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.387 [2024-11-20 12:44:01.933271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.387 [2024-11-20 12:44:01.945066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.387 [2024-11-20 12:44:01.945486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-11-20 12:44:01.945503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:01.945511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:01.945669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:01.945828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:01.945837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:01.945843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:01.945849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:01.957681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:01.958104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:01.958121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:01.958129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:01.958287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:01.958451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:01.958464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:01.958471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:01.958478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:01.970309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:01.970723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:01.970741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:01.970749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:01.970907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:01.971066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:01.971075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:01.971081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:01.971087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:01.982941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:01.983352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:01.983369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:01.983376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:01.983539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:01.983699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:01.983708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:01.983714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:01.983720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:01.995480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:01.995884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:01.995900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:01.995908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:01.996065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:01.996224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:01.996232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:01.996238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:01.996244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:02.008033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:02.008421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:02.008439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:02.008447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:02.008602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:02.008757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:02.008766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:02.008772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:02.008778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:02.020598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:02.021013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:02.021029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:02.021037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:02.021195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:02.021355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:02.021364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:02.021371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:02.021377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:02.033105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:02.033458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:02.033475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:02.033483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:02.033637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:02.033792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:02.033801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:02.033807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:02.033813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:02.045646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:02.046056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:02.046076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:02.046084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:02.046242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:02.046401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:02.046415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:02.046424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:02.046431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:02.058155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:02.058587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.388 [2024-11-20 12:44:02.058605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.388 [2024-11-20 12:44:02.058612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.388 [2024-11-20 12:44:02.058771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.388 [2024-11-20 12:44:02.058929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.388 [2024-11-20 12:44:02.058939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.388 [2024-11-20 12:44:02.058944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.388 [2024-11-20 12:44:02.058950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.388 [2024-11-20 12:44:02.070771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.388 [2024-11-20 12:44:02.071175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.389 [2024-11-20 12:44:02.071191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.389 [2024-11-20 12:44:02.071198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.389 [2024-11-20 12:44:02.071356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.389 [2024-11-20 12:44:02.071521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.389 [2024-11-20 12:44:02.071530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.389 [2024-11-20 12:44:02.071536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.389 [2024-11-20 12:44:02.071543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.389 [2024-11-20 12:44:02.083346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.389 [2024-11-20 12:44:02.083760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.389 [2024-11-20 12:44:02.083778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.389 [2024-11-20 12:44:02.083784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.389 [2024-11-20 12:44:02.083947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.389 [2024-11-20 12:44:02.084105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.389 [2024-11-20 12:44:02.084114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.389 [2024-11-20 12:44:02.084120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.389 [2024-11-20 12:44:02.084127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.389 [2024-11-20 12:44:02.095946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.389 [2024-11-20 12:44:02.096331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.389 [2024-11-20 12:44:02.096349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.389 [2024-11-20 12:44:02.096357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.389 [2024-11-20 12:44:02.096521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.389 [2024-11-20 12:44:02.096680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.389 [2024-11-20 12:44:02.096689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.389 [2024-11-20 12:44:02.096695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.389 [2024-11-20 12:44:02.096702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.389 [2024-11-20 12:44:02.108710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.389 [2024-11-20 12:44:02.109139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.389 [2024-11-20 12:44:02.109155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.389 [2024-11-20 12:44:02.109162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.389 [2024-11-20 12:44:02.109320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.389 [2024-11-20 12:44:02.109484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.389 [2024-11-20 12:44:02.109494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.389 [2024-11-20 12:44:02.109500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.389 [2024-11-20 12:44:02.109507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.389 [2024-11-20 12:44:02.121483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.389 [2024-11-20 12:44:02.121888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.389 [2024-11-20 12:44:02.121906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.389 [2024-11-20 12:44:02.121913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.389 [2024-11-20 12:44:02.122071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.389 [2024-11-20 12:44:02.122229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.389 [2024-11-20 12:44:02.122242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.389 [2024-11-20 12:44:02.122248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.389 [2024-11-20 12:44:02.122254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.389 [2024-11-20 12:44:02.134171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.389 [2024-11-20 12:44:02.134547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.389 [2024-11-20 12:44:02.134564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.389 [2024-11-20 12:44:02.134571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.389 [2024-11-20 12:44:02.134730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.389 [2024-11-20 12:44:02.134889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.389 [2024-11-20 12:44:02.134899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.389 [2024-11-20 12:44:02.134905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.389 [2024-11-20 12:44:02.134911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.146993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.147417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.147434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.147441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.147599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.147758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.147767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.147774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.147780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.159619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.160042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.160060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.160067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.160225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.160384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.160393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.160399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.160405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.172344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.172758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.172775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.172782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.172941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.173100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.173109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.173116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.173122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.185050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.185433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.185450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.185458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.185616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.185775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.185784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.185790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.185796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.197724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.198066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.198083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.198091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.198250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.198408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.198421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.198427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.198433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.210484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.210831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.210853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.210861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.211015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.211169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.211178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.211185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.211191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 11084.33 IOPS, 43.30 MiB/s [2024-11-20T11:44:02.414Z] [2024-11-20 12:44:02.223162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.223574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.223592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.223600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.223758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.223917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.223926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.223933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.223939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.235964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.650 [2024-11-20 12:44:02.236303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.650 [2024-11-20 12:44:02.236320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.650 [2024-11-20 12:44:02.236327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.650 [2024-11-20 12:44:02.236490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.650 [2024-11-20 12:44:02.236649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.650 [2024-11-20 12:44:02.236658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.650 [2024-11-20 12:44:02.236664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.650 [2024-11-20 12:44:02.236670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.650 [2024-11-20 12:44:02.248495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.248908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.248925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.248932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.249094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.249253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.249262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.249268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.249275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.261044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.261455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.261472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.261480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.261639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.261798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.261807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.261814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.261820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.273680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.274097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.274114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.274121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.274280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.274442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.274453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.274459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.274465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.286422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.286782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.286799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.286806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.286965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.287123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.287136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.287142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.287149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.299091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.299500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.299517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.299525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.299684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.299843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.299852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.299858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.299864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.311851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.312195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.312212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.312219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.312378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.312543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.312553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.312560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.312567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.324550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.325797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.325822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.325832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.326000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.326175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.326184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.326190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.326200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.337330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.337636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.337656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.337664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.337824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.337984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.337994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.338000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.338006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.349894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.350312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.350357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.350382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.350917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.351078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.351088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.351094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.351100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.651 [2024-11-20 12:44:02.362713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.651 [2024-11-20 12:44:02.363100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.651 [2024-11-20 12:44:02.363118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.651 [2024-11-20 12:44:02.363125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.651 [2024-11-20 12:44:02.363284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.651 [2024-11-20 12:44:02.363448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.651 [2024-11-20 12:44:02.363459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.651 [2024-11-20 12:44:02.363465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.651 [2024-11-20 12:44:02.363472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.652 [2024-11-20 12:44:02.375424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.652 [2024-11-20 12:44:02.375764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.652 [2024-11-20 12:44:02.375784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.652 [2024-11-20 12:44:02.375792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.652 [2024-11-20 12:44:02.375950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.652 [2024-11-20 12:44:02.376108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.652 [2024-11-20 12:44:02.376118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.652 [2024-11-20 12:44:02.376124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.652 [2024-11-20 12:44:02.376130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.652 [2024-11-20 12:44:02.388061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.652 [2024-11-20 12:44:02.388469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.652 [2024-11-20 12:44:02.388486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.652 [2024-11-20 12:44:02.388494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.652 [2024-11-20 12:44:02.388652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.652 [2024-11-20 12:44:02.388811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.652 [2024-11-20 12:44:02.388820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.652 [2024-11-20 12:44:02.388826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.652 [2024-11-20 12:44:02.388832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.652 [2024-11-20 12:44:02.400786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.652 [2024-11-20 12:44:02.401669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.652 [2024-11-20 12:44:02.401692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.652 [2024-11-20 12:44:02.401700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.652 [2024-11-20 12:44:02.401866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.652 [2024-11-20 12:44:02.402026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.652 [2024-11-20 12:44:02.402036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.652 [2024-11-20 12:44:02.402043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.652 [2024-11-20 12:44:02.402050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.912 [2024-11-20 12:44:02.413338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.912 [2024-11-20 12:44:02.413683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.912 [2024-11-20 12:44:02.413702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.912 [2024-11-20 12:44:02.413710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.912 [2024-11-20 12:44:02.413873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.414032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.414041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.414047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.414053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.425886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.426235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.426251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.426259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.426423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.426583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.426593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.426598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.426605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.438471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.438875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.438892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.438900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.439058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.439217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.439227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.439233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.439239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.451068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.451492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.451509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.451517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.451677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.451836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.451849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.451855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.451862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.463660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.464071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.464088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.464095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.464255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.464420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.464430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.464436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.464442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.476213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.476677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.476695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.476702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.476861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.477020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.477029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.477035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.477042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.488772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.489174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.489192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.489200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.489359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.489523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.489533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.489540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.489550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.501350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.501760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.501778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.501785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.501943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.502102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.502112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.502118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.502124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.513982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.514386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.514403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.514417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.514577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.913 [2024-11-20 12:44:02.514736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.913 [2024-11-20 12:44:02.514745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.913 [2024-11-20 12:44:02.514751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.913 [2024-11-20 12:44:02.514757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.913 [2024-11-20 12:44:02.526526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.913 [2024-11-20 12:44:02.526942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.913 [2024-11-20 12:44:02.526959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:56.913 [2024-11-20 12:44:02.526966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:56.913 [2024-11-20 12:44:02.527125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:56.914 [2024-11-20 12:44:02.527284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.914 [2024-11-20 12:44:02.527293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.914 [2024-11-20 12:44:02.527299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.914 [2024-11-20 12:44:02.527305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.914 [2024-11-20 12:44:02.539209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.539620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.539642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.539650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.539809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.539968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.539977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.539983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.539989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.551815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.552215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.552232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.552239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.552397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.552561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.552571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.552577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.552583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.564524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.564927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.564943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.564951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.565109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.565268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.565278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.565284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.565290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.577199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.577621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.577667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.577690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.578276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.578874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.578901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.578924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.578944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.589838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.590183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.590200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.590208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.590366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.590531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.590541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.590547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.590553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.602523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.602964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.602981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.602988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.603146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.603305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.603315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.603322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.603328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.615335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.615682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.615699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.615706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.615865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.616023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.616036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.616042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.616049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.627998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.628335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.628352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.628359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.628523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.628683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.628692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.628698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.628704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.640784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.641205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.641223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.641230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.914 [2024-11-20 12:44:02.641389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.914 [2024-11-20 12:44:02.641554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.914 [2024-11-20 12:44:02.641564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.914 [2024-11-20 12:44:02.641571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.914 [2024-11-20 12:44:02.641577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.914 [2024-11-20 12:44:02.653298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.914 [2024-11-20 12:44:02.653667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.914 [2024-11-20 12:44:02.653713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.914 [2024-11-20 12:44:02.653738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.915 [2024-11-20 12:44:02.654236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.915 [2024-11-20 12:44:02.654396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.915 [2024-11-20 12:44:02.654406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.915 [2024-11-20 12:44:02.654418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.915 [2024-11-20 12:44:02.654429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.915 [2024-11-20 12:44:02.665844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.915 [2024-11-20 12:44:02.666250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.915 [2024-11-20 12:44:02.666268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:56.915 [2024-11-20 12:44:02.666275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:56.915 [2024-11-20 12:44:02.666438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:56.915 [2024-11-20 12:44:02.666598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.915 [2024-11-20 12:44:02.666608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.915 [2024-11-20 12:44:02.666615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.915 [2024-11-20 12:44:02.666621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.678640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.679056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.679073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.175 [2024-11-20 12:44:02.679081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.175 [2024-11-20 12:44:02.679239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.175 [2024-11-20 12:44:02.679397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.175 [2024-11-20 12:44:02.679407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.175 [2024-11-20 12:44:02.679419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.175 [2024-11-20 12:44:02.679425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.691221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.691623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.691641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.175 [2024-11-20 12:44:02.691649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.175 [2024-11-20 12:44:02.691807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.175 [2024-11-20 12:44:02.691966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.175 [2024-11-20 12:44:02.691975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.175 [2024-11-20 12:44:02.691981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.175 [2024-11-20 12:44:02.691988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.703837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.704236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.704256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.175 [2024-11-20 12:44:02.704264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.175 [2024-11-20 12:44:02.704427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.175 [2024-11-20 12:44:02.704586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.175 [2024-11-20 12:44:02.704596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.175 [2024-11-20 12:44:02.704602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.175 [2024-11-20 12:44:02.704608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.716446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.716852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.716870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.175 [2024-11-20 12:44:02.716878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.175 [2024-11-20 12:44:02.717037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.175 [2024-11-20 12:44:02.717195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.175 [2024-11-20 12:44:02.717205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.175 [2024-11-20 12:44:02.717211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.175 [2024-11-20 12:44:02.717217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.729122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.729437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.729471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.175 [2024-11-20 12:44:02.729479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.175 [2024-11-20 12:44:02.729638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.175 [2024-11-20 12:44:02.729797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.175 [2024-11-20 12:44:02.729806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.175 [2024-11-20 12:44:02.729812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.175 [2024-11-20 12:44:02.729819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.741739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.742047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.742081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.175 [2024-11-20 12:44:02.742090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.175 [2024-11-20 12:44:02.742253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.175 [2024-11-20 12:44:02.742417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.175 [2024-11-20 12:44:02.742427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.175 [2024-11-20 12:44:02.742433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.175 [2024-11-20 12:44:02.742440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.754261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.754695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.754712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.175 [2024-11-20 12:44:02.754719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.175 [2024-11-20 12:44:02.754879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.175 [2024-11-20 12:44:02.755037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.175 [2024-11-20 12:44:02.755046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.175 [2024-11-20 12:44:02.755053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.175 [2024-11-20 12:44:02.755060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.175 [2024-11-20 12:44:02.767054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.175 [2024-11-20 12:44:02.767393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.175 [2024-11-20 12:44:02.767414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.767422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.767580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.767740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.767749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.767755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.767761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.779635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.780045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.780063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.780070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.780229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.780388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.780400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.780407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.780420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.792172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.792597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.792615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.792622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.792781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.792940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.792949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.792956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.792962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.804731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.805108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.805125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.805133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.805288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.805463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.805473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.805479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.805486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.817300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.817703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.817721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.817728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.817887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.818046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.818055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.818062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.818072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.829923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.830333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.830350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.830357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.830523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.830683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.830692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.830699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.830705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.842565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.842903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.842921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.842929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.843089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.843246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.843255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.843262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.843268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.855159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.855599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.855616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.855624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.855782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.855941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.855950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.855956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.855962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.867968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.868305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.868325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.868333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.868503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.868662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.868672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.868678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.868684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.880750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.176 [2024-11-20 12:44:02.881166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.176 [2024-11-20 12:44:02.881183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.176 [2024-11-20 12:44:02.881191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.176 [2024-11-20 12:44:02.881349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.176 [2024-11-20 12:44:02.881515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.176 [2024-11-20 12:44:02.881526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.176 [2024-11-20 12:44:02.881532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.176 [2024-11-20 12:44:02.881538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.176 [2024-11-20 12:44:02.893305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.176 [2024-11-20 12:44:02.893697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-20 12:44:02.893714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.177 [2024-11-20 12:44:02.893721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.177 [2024-11-20 12:44:02.893875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.177 [2024-11-20 12:44:02.894030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.177 [2024-11-20 12:44:02.894039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.177 [2024-11-20 12:44:02.894045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.177 [2024-11-20 12:44:02.894051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.177 [2024-11-20 12:44:02.905844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.177 [2024-11-20 12:44:02.906253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-20 12:44:02.906270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.177 [2024-11-20 12:44:02.906278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.177 [2024-11-20 12:44:02.906445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.177 [2024-11-20 12:44:02.906605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.177 [2024-11-20 12:44:02.906615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.177 [2024-11-20 12:44:02.906621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.177 [2024-11-20 12:44:02.906627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.177 [2024-11-20 12:44:02.918457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.177 [2024-11-20 12:44:02.918868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-20 12:44:02.918886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.177 [2024-11-20 12:44:02.918893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.177 [2024-11-20 12:44:02.919051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.177 [2024-11-20 12:44:02.919210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.177 [2024-11-20 12:44:02.919219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.177 [2024-11-20 12:44:02.919226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.177 [2024-11-20 12:44:02.919232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.177 [2024-11-20 12:44:02.931141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.177 [2024-11-20 12:44:02.931470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-11-20 12:44:02.931488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.177 [2024-11-20 12:44:02.931495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.177 [2024-11-20 12:44:02.931655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.177 [2024-11-20 12:44:02.931838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.177 [2024-11-20 12:44:02.931848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.177 [2024-11-20 12:44:02.931855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.177 [2024-11-20 12:44:02.931861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.437 [2024-11-20 12:44:02.943873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.437 [2024-11-20 12:44:02.944289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 12:44:02.944307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.437 [2024-11-20 12:44:02.944314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.437 [2024-11-20 12:44:02.944478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:02.944638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:02.944651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:02.944657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:02.944663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:02.956512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:02.956922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:02.956939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:02.956947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:02.957107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:02.957266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:02.957276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:02.957282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:02.957288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:02.969116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:02.969529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:02.969546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:02.969554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:02.969718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:02.969878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:02.969887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:02.969894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:02.969900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:02.981781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:02.982189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:02.982206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:02.982214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:02.982373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:02.982540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:02.982551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:02.982557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:02.982569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:02.994285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:02.994701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:02.994718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:02.994727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:02.994885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:02.995044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:02.995053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:02.995060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:02.995065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:03.006817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:03.007244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:03.007260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:03.007268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:03.007433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:03.007594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:03.007603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:03.007610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:03.007616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:03.019424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:03.019742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:03.019758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:03.019765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:03.019920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:03.020075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:03.020084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:03.020091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:03.020098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:03.032040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:03.032446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:03.032466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:03.032474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:03.032632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:03.032791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:03.032800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:03.032807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:03.032813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:03.044620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:03.045006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:03.045023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:03.045030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:03.045188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:03.045347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:03.045356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:03.045362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:03.045369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:03.057240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:03.057651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:03.057668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:03.057675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.438 [2024-11-20 12:44:03.057833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.438 [2024-11-20 12:44:03.057992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.438 [2024-11-20 12:44:03.058001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.438 [2024-11-20 12:44:03.058007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.438 [2024-11-20 12:44:03.058014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.438 [2024-11-20 12:44:03.069779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.438 [2024-11-20 12:44:03.070186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 12:44:03.070202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.438 [2024-11-20 12:44:03.070210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.070371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.070537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.070547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.070553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.070559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.082322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.082711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.082728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.082735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.082889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.083043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.083053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.083059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.083065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.094930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.095331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.095348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.095355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.095520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.095680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.095689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.095695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.095702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.107461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.107835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.107852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.107859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.108017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.108177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.108189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.108195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.108203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.120221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.120560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.120577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.120585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.120744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.120903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.120912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.120918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.120924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.132977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.133388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.133405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.133418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.133577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.133735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.133745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.133751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.133757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.145597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.145993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.146010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.146018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.146176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.146335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.146344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.146350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.146360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.158111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.158451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.158497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.158521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.159013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.159173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.159182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.159188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.159195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.170715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.171117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.171133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.171141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.171299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.171464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.171474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.171480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.171487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.183293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.183697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.183714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.439 [2024-11-20 12:44:03.183721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.439 [2024-11-20 12:44:03.183879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.439 [2024-11-20 12:44:03.184038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.439 [2024-11-20 12:44:03.184047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.439 [2024-11-20 12:44:03.184053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.439 [2024-11-20 12:44:03.184059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.439 [2024-11-20 12:44:03.196048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.439 [2024-11-20 12:44:03.196416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 12:44:03.196436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.440 [2024-11-20 12:44:03.196444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.440 [2024-11-20 12:44:03.196603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.440 [2024-11-20 12:44:03.196762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.440 [2024-11-20 12:44:03.196772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.440 [2024-11-20 12:44:03.196779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.440 [2024-11-20 12:44:03.196785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.208743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.209149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.209166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.209173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.209332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.209497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.209507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.209513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.209519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 8313.25 IOPS, 32.47 MiB/s [2024-11-20T11:44:03.465Z] [2024-11-20 12:44:03.222409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.222794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.222811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.222818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.222977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.223136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.223146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.223152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.223158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.234941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.235252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.235269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.235276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.235441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.235597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.235606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.235612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.235618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.247563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.247971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.247989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.247997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.248156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.248315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.248324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.248330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.248336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.260114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.260510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.260528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.260536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.260695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.260854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.260863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.260869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.260875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.272700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.273099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.273116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.273123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.273282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.273447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.273461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.273468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.273474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.285292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.285699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.285716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.285724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.285882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.286040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.286050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.286056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.286063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.297870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.298278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.298295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.298302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.298467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.298626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.298635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.701 [2024-11-20 12:44:03.298641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.701 [2024-11-20 12:44:03.298647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.701 [2024-11-20 12:44:03.310508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.701 [2024-11-20 12:44:03.310925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.701 [2024-11-20 12:44:03.310942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.701 [2024-11-20 12:44:03.310949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.701 [2024-11-20 12:44:03.311108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.701 [2024-11-20 12:44:03.311267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.701 [2024-11-20 12:44:03.311277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.311283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.311293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.323079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.323487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.323505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.323512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.323670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.323829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.323838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.323844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.323850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.335615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.335998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.336015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.336022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.336180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.336338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.336347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.336354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.336360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.348276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.348667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.348684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.348692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.348850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.349008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.349017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.349024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.349030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.360945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.361360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.361377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.361385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.361548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.361708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.361718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.361724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.361731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.373772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.374204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.374221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.374228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.374386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.374550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.374560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.374567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.374574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.386503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.386920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.386937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.386944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.387102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.387260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.387269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.387275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.387281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.399132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.399510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.399528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.399536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.399694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.399848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.399857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.399863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.399869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.411825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.412173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.412190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.412198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.412366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.412545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.412555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.412561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.412567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.424439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.702 [2024-11-20 12:44:03.424846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.702 [2024-11-20 12:44:03.424863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.702 [2024-11-20 12:44:03.424870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.702 [2024-11-20 12:44:03.425030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.702 [2024-11-20 12:44:03.425189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.702 [2024-11-20 12:44:03.425198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.702 [2024-11-20 12:44:03.425205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.702 [2024-11-20 12:44:03.425211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.702 [2024-11-20 12:44:03.436965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.703 [2024-11-20 12:44:03.437344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.703 [2024-11-20 12:44:03.437361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:57.703 [2024-11-20 12:44:03.437368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:57.703 [2024-11-20 12:44:03.437533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:57.703 [2024-11-20 12:44:03.437692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.703 [2024-11-20 12:44:03.437704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.703 [2024-11-20 12:44:03.437710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.703 [2024-11-20 12:44:03.437717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.703 [2024-11-20 12:44:03.449568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.703 [2024-11-20 12:44:03.449893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.703 [2024-11-20 12:44:03.449910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.703 [2024-11-20 12:44:03.449918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.703 [2024-11-20 12:44:03.450072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.703 [2024-11-20 12:44:03.450227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.703 [2024-11-20 12:44:03.450236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.703 [2024-11-20 12:44:03.450243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.703 [2024-11-20 12:44:03.450248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.963 [2024-11-20 12:44:03.462384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.963 [2024-11-20 12:44:03.462794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.963 [2024-11-20 12:44:03.462812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.963 [2024-11-20 12:44:03.462820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.963 [2024-11-20 12:44:03.462979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.963 [2024-11-20 12:44:03.463138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.963 [2024-11-20 12:44:03.463147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.963 [2024-11-20 12:44:03.463154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.963 [2024-11-20 12:44:03.463160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.963 [2024-11-20 12:44:03.474948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.963 [2024-11-20 12:44:03.475285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.963 [2024-11-20 12:44:03.475302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.963 [2024-11-20 12:44:03.475310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.963 [2024-11-20 12:44:03.475475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.963 [2024-11-20 12:44:03.475634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.963 [2024-11-20 12:44:03.475644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.963 [2024-11-20 12:44:03.475650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.963 [2024-11-20 12:44:03.475659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.963 [2024-11-20 12:44:03.487510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.963 [2024-11-20 12:44:03.487912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.963 [2024-11-20 12:44:03.487929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.963 [2024-11-20 12:44:03.487937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.963 [2024-11-20 12:44:03.488095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.963 [2024-11-20 12:44:03.488254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.963 [2024-11-20 12:44:03.488264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.963 [2024-11-20 12:44:03.488270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.963 [2024-11-20 12:44:03.488276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.963 [2024-11-20 12:44:03.500014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.500397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.500419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.500427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.500587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.500745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.500755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.500761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.500767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.512676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.513099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.513116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.513123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.513282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.513448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.513458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.513464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.513471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.525324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.525712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.525733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.525740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.525898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.526057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.526066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.526072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.526078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.537864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.538272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.538290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.538297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.538462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.538621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.538631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.538637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.538644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.550497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.550908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.550953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.550977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.551408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.551812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.551830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.551846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.551860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.565162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.565686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.565708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.565718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.565976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.566230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.566243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.566252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.566262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.578222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.578656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.578673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.578681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.578853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.579025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.579035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.579042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.579049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.590768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.591175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.591192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.591199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.591357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.591523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.591532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.591538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.591544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.603307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.603720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.964 [2024-11-20 12:44:03.603738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.964 [2024-11-20 12:44:03.603746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.964 [2024-11-20 12:44:03.603905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.964 [2024-11-20 12:44:03.604064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.964 [2024-11-20 12:44:03.604078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.964 [2024-11-20 12:44:03.604084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.964 [2024-11-20 12:44:03.604090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.964 [2024-11-20 12:44:03.615931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.964 [2024-11-20 12:44:03.616386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.616403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.616416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.616575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.616734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.616744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.616750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.616757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.628740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.629077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.629097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.629104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.629264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.629430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.629441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.629448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.629454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.641313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.641726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.641744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.641751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.641910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.642070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.642079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.642086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.642097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.653919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.654330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.654348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.654356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.654521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.654682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.654691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.654697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.654703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.666601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.666911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.666929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.666937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.667095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.667254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.667264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.667270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.667276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.679205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.679611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.679630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.679637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.679796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.679955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.679965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.679972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.679977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.691914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.692323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.692344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.692352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.692515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.692675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.692684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.692691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.692697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.704598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.704987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.705005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.705012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.705169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.705328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.705338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.705344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.705350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.965 [2024-11-20 12:44:03.717310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.965 [2024-11-20 12:44:03.717660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.965 [2024-11-20 12:44:03.717705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:57.965 [2024-11-20 12:44:03.717729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:57.965 [2024-11-20 12:44:03.718308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:57.965 [2024-11-20 12:44:03.718694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.965 [2024-11-20 12:44:03.718704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.965 [2024-11-20 12:44:03.718711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.965 [2024-11-20 12:44:03.718718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.225 [2024-11-20 12:44:03.730124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.225 [2024-11-20 12:44:03.730454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.225 [2024-11-20 12:44:03.730472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.225 [2024-11-20 12:44:03.730480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.225 [2024-11-20 12:44:03.730643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.225 [2024-11-20 12:44:03.730802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.225 [2024-11-20 12:44:03.730811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.225 [2024-11-20 12:44:03.730818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.225 [2024-11-20 12:44:03.730824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.225 [2024-11-20 12:44:03.742977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.226 [2024-11-20 12:44:03.743252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.226 [2024-11-20 12:44:03.743269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.226 [2024-11-20 12:44:03.743277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.226 [2024-11-20 12:44:03.743443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.226 [2024-11-20 12:44:03.743603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.226 [2024-11-20 12:44:03.743612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.226 [2024-11-20 12:44:03.743619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.226 [2024-11-20 12:44:03.743624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.226 [2024-11-20 12:44:03.755681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.226 [2024-11-20 12:44:03.756725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.226 [2024-11-20 12:44:03.756748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.226 [2024-11-20 12:44:03.756757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.226 [2024-11-20 12:44:03.756923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.226 [2024-11-20 12:44:03.757083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.226 [2024-11-20 12:44:03.757093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.226 [2024-11-20 12:44:03.757099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.226 [2024-11-20 12:44:03.757106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.226 [2024-11-20 12:44:03.768512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.226 [2024-11-20 12:44:03.768935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.226 [2024-11-20 12:44:03.768953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.226 [2024-11-20 12:44:03.768961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.226 [2024-11-20 12:44:03.769120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.226 [2024-11-20 12:44:03.769279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.226 [2024-11-20 12:44:03.769292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.226 [2024-11-20 12:44:03.769299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.226 [2024-11-20 12:44:03.769305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.226 [2024-11-20 12:44:03.781228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.226 [2024-11-20 12:44:03.781572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.226 [2024-11-20 12:44:03.781590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.226 [2024-11-20 12:44:03.781598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.226 [2024-11-20 12:44:03.781756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.226 [2024-11-20 12:44:03.781915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.226 [2024-11-20 12:44:03.781925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.226 [2024-11-20 12:44:03.781931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.226 [2024-11-20 12:44:03.781937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.226 [2024-11-20 12:44:03.794007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.226 [2024-11-20 12:44:03.794353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.226 [2024-11-20 12:44:03.794370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.226 [2024-11-20 12:44:03.794377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.226 [2024-11-20 12:44:03.794543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.226 [2024-11-20 12:44:03.794702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.226 [2024-11-20 12:44:03.794712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.226 [2024-11-20 12:44:03.794718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.226 [2024-11-20 12:44:03.794724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.226 [2024-11-20 12:44:03.806791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.226 [2024-11-20 12:44:03.807098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-11-20 12:44:03.807115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.226 [2024-11-20 12:44:03.807123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.226 [2024-11-20 12:44:03.807281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.226 [2024-11-20 12:44:03.807451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.226 [2024-11-20 12:44:03.807462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.226 [2024-11-20 12:44:03.807469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.226 [2024-11-20 12:44:03.807478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.226 [2024-11-20 12:44:03.819544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.226 [2024-11-20 12:44:03.819885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-11-20 12:44:03.819901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.226 [2024-11-20 12:44:03.819909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.226 [2024-11-20 12:44:03.820067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.226 [2024-11-20 12:44:03.820226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.226 [2024-11-20 12:44:03.820236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.226 [2024-11-20 12:44:03.820243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.226 [2024-11-20 12:44:03.820249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.226 [2024-11-20 12:44:03.832317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.226 [2024-11-20 12:44:03.832660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-11-20 12:44:03.832677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.226 [2024-11-20 12:44:03.832685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.226 [2024-11-20 12:44:03.832844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.226 [2024-11-20 12:44:03.833003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.226 [2024-11-20 12:44:03.833012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.226 [2024-11-20 12:44:03.833018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.226 [2024-11-20 12:44:03.833025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.226 [2024-11-20 12:44:03.845198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.226 [2024-11-20 12:44:03.845491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-11-20 12:44:03.845509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.226 [2024-11-20 12:44:03.845518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.226 [2024-11-20 12:44:03.845689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.226 [2024-11-20 12:44:03.845860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.226 [2024-11-20 12:44:03.845869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.226 [2024-11-20 12:44:03.845875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.226 [2024-11-20 12:44:03.845881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.226 [2024-11-20 12:44:03.858028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.226 [2024-11-20 12:44:03.858369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-11-20 12:44:03.858390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.226 [2024-11-20 12:44:03.858398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.226 [2024-11-20 12:44:03.858579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.226 [2024-11-20 12:44:03.858753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.226 [2024-11-20 12:44:03.858762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.858769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.858776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.870870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.871287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.871305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.871313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.871493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.871666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.871676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.871683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.871690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.883733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.884073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.884090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.884098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.884256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.884450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.884461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.884468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.884474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.896594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.896988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.897005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.897012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.897174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.897333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.897343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.897349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.897355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.909416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.909666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.909684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.909691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.909850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.910009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.910018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.910025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.910031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.922231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.922601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.922619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.922627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.922785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.922944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.922953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.922959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.922965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.935013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.935434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.935451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.935459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.935618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.935777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.935789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.935795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.935801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.947857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.948119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.948136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.948143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.948301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.948467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.948477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.948484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.948490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.960676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.961038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.961055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.961063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.961222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.961380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.961390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.961396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.961402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.227 [2024-11-20 12:44:03.973464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.227 [2024-11-20 12:44:03.973857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-11-20 12:44:03.973874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.227 [2024-11-20 12:44:03.973881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.227 [2024-11-20 12:44:03.974039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.227 [2024-11-20 12:44:03.974197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.227 [2024-11-20 12:44:03.974207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.227 [2024-11-20 12:44:03.974213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.227 [2024-11-20 12:44:03.974223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:03.986280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:03.986686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:03.986702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:03.986709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:03.986867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:03.987025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:03.987033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:03.987040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:03.987046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:03.999106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:03.999445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:03.999461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:03.999468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:03.999626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:03.999785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:03.999793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:03.999799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:03.999805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:04.011777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:04.012188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:04.012204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:04.012210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:04.012369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:04.012532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:04.012541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:04.012546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:04.012552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:04.024420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:04.024845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:04.024863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:04.024871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:04.025029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:04.025187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:04.025195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:04.025201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:04.025207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:04.037085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:04.037468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:04.037485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:04.037492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:04.037651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:04.037810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:04.037818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:04.037824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:04.037830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:04.049698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:04.050035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:04.050051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:04.050058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:04.050215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:04.050373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:04.050381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:04.050387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:04.050393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:04.062230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:04.062655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:04.062671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:04.062678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:04.062839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:04.062997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:04.063005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:04.063010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:04.063016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:04.074880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.488 [2024-11-20 12:44:04.075292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.488 [2024-11-20 12:44:04.075308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.488 [2024-11-20 12:44:04.075315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.488 [2024-11-20 12:44:04.075478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.488 [2024-11-20 12:44:04.075636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.488 [2024-11-20 12:44:04.075644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.488 [2024-11-20 12:44:04.075650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.488 [2024-11-20 12:44:04.075656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.488 [2024-11-20 12:44:04.087557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.488 [2024-11-20 12:44:04.087960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.488 [2024-11-20 12:44:04.087976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.488 [2024-11-20 12:44:04.087982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.488 [2024-11-20 12:44:04.088141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.488 [2024-11-20 12:44:04.088298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.488 [2024-11-20 12:44:04.088306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.488 [2024-11-20 12:44:04.088312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.488 [2024-11-20 12:44:04.088318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.488 [2024-11-20 12:44:04.100220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.488 [2024-11-20 12:44:04.100627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.488 [2024-11-20 12:44:04.100644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.488 [2024-11-20 12:44:04.100651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.100810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.100969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.100979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.100985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.100991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.112972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.113356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.113371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.113378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.113542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.113700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.113708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.113713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.113719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.125642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.125995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.126011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.126017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.126175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.126333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.126341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.126347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.126353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.138431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.138822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.138838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.138844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.139002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.139160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.139168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.139174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.139183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.151157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.151575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.151591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.151598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.151756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.151914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.151922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.151928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.151934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.163825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.164236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.164251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.164258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.164421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.164578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.164586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.164592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.164597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.176480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.176818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.176833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.176840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.176999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.177157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.177165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.177171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.177177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.189098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.189435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.189454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.189461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.189618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.189776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.189783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.189789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.189795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.201654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.202052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.202067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.202074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.202232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.202390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.202398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.202404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.202409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.489 [2024-11-20 12:44:04.214263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.489 [2024-11-20 12:44:04.214602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.489 [2024-11-20 12:44:04.214618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.489 [2024-11-20 12:44:04.214624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.489 [2024-11-20 12:44:04.214782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.489 [2024-11-20 12:44:04.214940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.489 [2024-11-20 12:44:04.214948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.489 [2024-11-20 12:44:04.214954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.489 [2024-11-20 12:44:04.214960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.488 6650.60 IOPS, 25.98 MiB/s [2024-11-20T11:44:04.253Z] [2024-11-20 12:44:04.227094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.488 [2024-11-20 12:44:04.227498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.488 [2024-11-20 12:44:04.227514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.490 [2024-11-20 12:44:04.227520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.490 [2024-11-20 12:44:04.227682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.490 [2024-11-20 12:44:04.227840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.490 [2024-11-20 12:44:04.227848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.490 [2024-11-20 12:44:04.227853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.490 [2024-11-20 12:44:04.227859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.490 [2024-11-20 12:44:04.239664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.490 [2024-11-20 12:44:04.240004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.490 [2024-11-20 12:44:04.240020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.490 [2024-11-20 12:44:04.240027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.490 [2024-11-20 12:44:04.240185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.490 [2024-11-20 12:44:04.240341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.490 [2024-11-20 12:44:04.240349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.490 [2024-11-20 12:44:04.240355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.490 [2024-11-20 12:44:04.240361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.750 [2024-11-20 12:44:04.252250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.750 [2024-11-20 12:44:04.252661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.750 [2024-11-20 12:44:04.252677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.750 [2024-11-20 12:44:04.252684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.750 [2024-11-20 12:44:04.252841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.750 [2024-11-20 12:44:04.252999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.750 [2024-11-20 12:44:04.253007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.750 [2024-11-20 12:44:04.253013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.750 [2024-11-20 12:44:04.253018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.750 [2024-11-20 12:44:04.264866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.750 [2024-11-20 12:44:04.265269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.750 [2024-11-20 12:44:04.265284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.750 [2024-11-20 12:44:04.265291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.750 [2024-11-20 12:44:04.265453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.750 [2024-11-20 12:44:04.265612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.750 [2024-11-20 12:44:04.265623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.750 [2024-11-20 12:44:04.265629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.750 [2024-11-20 12:44:04.265635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.750 [2024-11-20 12:44:04.277448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.750 [2024-11-20 12:44:04.277780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.750 [2024-11-20 12:44:04.277796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.750 [2024-11-20 12:44:04.277802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.750 [2024-11-20 12:44:04.277961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.750 [2024-11-20 12:44:04.278118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.750 [2024-11-20 12:44:04.278125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.750 [2024-11-20 12:44:04.278131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.750 [2024-11-20 12:44:04.278137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.750 [2024-11-20 12:44:04.290064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.750 [2024-11-20 12:44:04.290468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.750 [2024-11-20 12:44:04.290484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.750 [2024-11-20 12:44:04.290490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.750 [2024-11-20 12:44:04.290648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.750 [2024-11-20 12:44:04.290806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.750 [2024-11-20 12:44:04.290814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.750 [2024-11-20 12:44:04.290820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.750 [2024-11-20 12:44:04.290826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.750 [2024-11-20 12:44:04.302614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.750 [2024-11-20 12:44:04.303021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.750 [2024-11-20 12:44:04.303036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.750 [2024-11-20 12:44:04.303043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.750 [2024-11-20 12:44:04.303200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.750 [2024-11-20 12:44:04.303358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.750 [2024-11-20 12:44:04.303365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.750 [2024-11-20 12:44:04.303371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.750 [2024-11-20 12:44:04.303380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.750 [2024-11-20 12:44:04.315249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.750 [2024-11-20 12:44:04.315649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.750 [2024-11-20 12:44:04.315665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.750 [2024-11-20 12:44:04.315672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.750 [2024-11-20 12:44:04.315829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.750 [2024-11-20 12:44:04.315987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.750 [2024-11-20 12:44:04.315995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.750 [2024-11-20 12:44:04.316000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.750 [2024-11-20 12:44:04.316006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.750 [2024-11-20 12:44:04.327804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.750 [2024-11-20 12:44:04.328206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.750 [2024-11-20 12:44:04.328222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.750 [2024-11-20 12:44:04.328228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.750 [2024-11-20 12:44:04.328386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.750 [2024-11-20 12:44:04.328549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.750 [2024-11-20 12:44:04.328558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.750 [2024-11-20 12:44:04.328563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.751 [2024-11-20 12:44:04.328569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.751 [2024-11-20 12:44:04.340426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.751 [2024-11-20 12:44:04.340803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.751 [2024-11-20 12:44:04.340819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.751 [2024-11-20 12:44:04.340825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.751 [2024-11-20 12:44:04.340983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.751 [2024-11-20 12:44:04.341141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.751 [2024-11-20 12:44:04.341149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.751 [2024-11-20 12:44:04.341154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.751 [2024-11-20 12:44:04.341160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.751 [2024-11-20 12:44:04.352953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.751 [2024-11-20 12:44:04.353397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.751 [2024-11-20 12:44:04.353418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.751 [2024-11-20 12:44:04.353425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.751 [2024-11-20 12:44:04.353582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.751 [2024-11-20 12:44:04.353739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.751 [2024-11-20 12:44:04.353746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.751 [2024-11-20 12:44:04.353753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.751 [2024-11-20 12:44:04.353758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.751 [2024-11-20 12:44:04.365621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.751 [2024-11-20 12:44:04.365963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.751 [2024-11-20 12:44:04.365979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.751 [2024-11-20 12:44:04.365986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.751 [2024-11-20 12:44:04.366144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.751 [2024-11-20 12:44:04.366301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.751 [2024-11-20 12:44:04.366309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.751 [2024-11-20 12:44:04.366315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.751 [2024-11-20 12:44:04.366321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.751 [2024-11-20 12:44:04.378213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.751 [2024-11-20 12:44:04.378645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.751 [2024-11-20 12:44:04.378661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:58.751 [2024-11-20 12:44:04.378668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:58.751 [2024-11-20 12:44:04.378826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:58.751 [2024-11-20 12:44:04.378984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.751 [2024-11-20 12:44:04.378992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.751 [2024-11-20 12:44:04.378998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.751 [2024-11-20 12:44:04.379004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.751 [2024-11-20 12:44:04.390994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.751 [2024-11-20 12:44:04.391394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.751 [2024-11-20 12:44:04.391409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.751 [2024-11-20 12:44:04.391422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.751 [2024-11-20 12:44:04.391582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.751 [2024-11-20 12:44:04.391741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.751 [2024-11-20 12:44:04.391748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.751 [2024-11-20 12:44:04.391754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.751 [2024-11-20 12:44:04.391760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.751 [2024-11-20 12:44:04.403665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.751 [2024-11-20 12:44:04.404024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.751 [2024-11-20 12:44:04.404040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.751 [2024-11-20 12:44:04.404047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.751 [2024-11-20 12:44:04.404204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.751 [2024-11-20 12:44:04.404361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.751 [2024-11-20 12:44:04.404368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.751 [2024-11-20 12:44:04.404374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.751 [2024-11-20 12:44:04.404380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.751 [2024-11-20 12:44:04.416238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.751 [2024-11-20 12:44:04.416641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.751 [2024-11-20 12:44:04.416657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.751 [2024-11-20 12:44:04.416663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.751 [2024-11-20 12:44:04.416821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.751 [2024-11-20 12:44:04.416978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.751 [2024-11-20 12:44:04.416985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.751 [2024-11-20 12:44:04.416991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.751 [2024-11-20 12:44:04.416997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.751 [2024-11-20 12:44:04.428821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.751 [2024-11-20 12:44:04.429201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.751 [2024-11-20 12:44:04.429216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.751 [2024-11-20 12:44:04.429223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.751 [2024-11-20 12:44:04.429380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.751 [2024-11-20 12:44:04.429543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.751 [2024-11-20 12:44:04.429554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.751 [2024-11-20 12:44:04.429560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.751 [2024-11-20 12:44:04.429566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.751 [2024-11-20 12:44:04.441448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.751 [2024-11-20 12:44:04.441828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.751 [2024-11-20 12:44:04.441843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.751 [2024-11-20 12:44:04.441850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.751 [2024-11-20 12:44:04.442008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.751 [2024-11-20 12:44:04.442165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.751 [2024-11-20 12:44:04.442173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.751 [2024-11-20 12:44:04.442179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.751 [2024-11-20 12:44:04.442184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.751 [2024-11-20 12:44:04.453969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.751 [2024-11-20 12:44:04.454358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.751 [2024-11-20 12:44:04.454374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.751 [2024-11-20 12:44:04.454380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.751 [2024-11-20 12:44:04.454544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.751 [2024-11-20 12:44:04.454702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.752 [2024-11-20 12:44:04.454710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.752 [2024-11-20 12:44:04.454716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.752 [2024-11-20 12:44:04.454721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.752 [2024-11-20 12:44:04.466606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.752 [2024-11-20 12:44:04.466923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.752 [2024-11-20 12:44:04.466938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.752 [2024-11-20 12:44:04.466945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.752 [2024-11-20 12:44:04.467102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.752 [2024-11-20 12:44:04.467259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.752 [2024-11-20 12:44:04.467267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.752 [2024-11-20 12:44:04.467273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.752 [2024-11-20 12:44:04.467281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.752 [2024-11-20 12:44:04.479208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.752 [2024-11-20 12:44:04.479609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.752 [2024-11-20 12:44:04.479624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.752 [2024-11-20 12:44:04.479631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.752 [2024-11-20 12:44:04.479788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.752 [2024-11-20 12:44:04.479945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.752 [2024-11-20 12:44:04.479953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.752 [2024-11-20 12:44:04.479959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.752 [2024-11-20 12:44:04.479964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.752 [2024-11-20 12:44:04.491818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.752 [2024-11-20 12:44:04.492215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.752 [2024-11-20 12:44:04.492230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.752 [2024-11-20 12:44:04.492237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.752 [2024-11-20 12:44:04.492394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.752 [2024-11-20 12:44:04.492558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.752 [2024-11-20 12:44:04.492567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.752 [2024-11-20 12:44:04.492573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.752 [2024-11-20 12:44:04.492578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.752 [2024-11-20 12:44:04.504336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.752 [2024-11-20 12:44:04.504750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.752 [2024-11-20 12:44:04.504766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:58.752 [2024-11-20 12:44:04.504773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:58.752 [2024-11-20 12:44:04.504930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:58.752 [2024-11-20 12:44:04.505088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.752 [2024-11-20 12:44:04.505095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.752 [2024-11-20 12:44:04.505101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.752 [2024-11-20 12:44:04.505107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.517145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.517567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.517583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.517589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.517746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.517904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.517912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.517917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.517923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.529654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.530057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.530073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.530080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.530238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.530395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.530402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.530408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.530420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.542274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.542701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.542717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.542724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.542881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.543039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.543046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.543052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.543057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.554911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.555317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.555332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.555339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.555505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.555664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.555671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.555678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.555684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.567539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.567940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.567956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.567962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.568119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.568277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.568284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.568290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.568295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.580110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.580514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.580531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.580538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.580695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.580853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.580861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.580867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.580873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.592629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.593041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.593085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.593108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.593701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.593925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.593935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.593941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.593947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.605478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.605813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.605829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.605836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.605993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.606150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.606158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.606164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.606170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.618137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.618516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.618531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.618538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.013 [2024-11-20 12:44:04.618695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.013 [2024-11-20 12:44:04.618852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.013 [2024-11-20 12:44:04.618859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.013 [2024-11-20 12:44:04.618865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.013 [2024-11-20 12:44:04.618871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.013 [2024-11-20 12:44:04.630826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.013 [2024-11-20 12:44:04.631273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.013 [2024-11-20 12:44:04.631289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.013 [2024-11-20 12:44:04.631296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.014 [2024-11-20 12:44:04.631459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.014 [2024-11-20 12:44:04.631617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.014 [2024-11-20 12:44:04.631625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.014 [2024-11-20 12:44:04.631631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.014 [2024-11-20 12:44:04.631640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.014 [2024-11-20 12:44:04.643348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.014 [2024-11-20 12:44:04.643780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.014 [2024-11-20 12:44:04.643796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.014 [2024-11-20 12:44:04.643803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.014 [2024-11-20 12:44:04.643961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.014 [2024-11-20 12:44:04.644118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.014 [2024-11-20 12:44:04.644126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.014 [2024-11-20 12:44:04.644132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.014 [2024-11-20 12:44:04.644137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.014 [2024-11-20 12:44:04.655884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.014 [2024-11-20 12:44:04.656184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.014 [2024-11-20 12:44:04.656200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.014 [2024-11-20 12:44:04.656207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.014 [2024-11-20 12:44:04.656361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.014 [2024-11-20 12:44:04.656539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.014 [2024-11-20 12:44:04.656547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.014 [2024-11-20 12:44:04.656553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.014 [2024-11-20 12:44:04.656559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.014 [2024-11-20 12:44:04.668456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.668857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.668873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.014 [2024-11-20 12:44:04.668880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.014 [2024-11-20 12:44:04.669036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.014 [2024-11-20 12:44:04.669193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.014 [2024-11-20 12:44:04.669201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.014 [2024-11-20 12:44:04.669207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.014 [2024-11-20 12:44:04.669212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.014 [2024-11-20 12:44:04.681113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.681520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.681536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.014 [2024-11-20 12:44:04.681543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.014 [2024-11-20 12:44:04.681700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.014 [2024-11-20 12:44:04.681858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.014 [2024-11-20 12:44:04.681866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.014 [2024-11-20 12:44:04.681872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.014 [2024-11-20 12:44:04.681877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.014 [2024-11-20 12:44:04.693631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.694010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.694026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.014 [2024-11-20 12:44:04.694033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.014 [2024-11-20 12:44:04.694191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.014 [2024-11-20 12:44:04.694348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.014 [2024-11-20 12:44:04.694355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.014 [2024-11-20 12:44:04.694361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.014 [2024-11-20 12:44:04.694367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.014 [2024-11-20 12:44:04.706231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.706651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.706667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.014 [2024-11-20 12:44:04.706673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.014 [2024-11-20 12:44:04.706831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.014 [2024-11-20 12:44:04.706989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.014 [2024-11-20 12:44:04.706997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.014 [2024-11-20 12:44:04.707003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.014 [2024-11-20 12:44:04.707009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.014 [2024-11-20 12:44:04.718823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.719149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.719165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.014 [2024-11-20 12:44:04.719172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.014 [2024-11-20 12:44:04.719328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.014 [2024-11-20 12:44:04.719505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.014 [2024-11-20 12:44:04.719513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.014 [2024-11-20 12:44:04.719519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.014 [2024-11-20 12:44:04.719524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.014 [2024-11-20 12:44:04.731424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.731758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.731773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.014 [2024-11-20 12:44:04.731780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.014 [2024-11-20 12:44:04.731937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.014 [2024-11-20 12:44:04.732095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.014 [2024-11-20 12:44:04.732102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.014 [2024-11-20 12:44:04.732108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.014 [2024-11-20 12:44:04.732114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.014 [2024-11-20 12:44:04.744057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.744380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.744395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.014 [2024-11-20 12:44:04.744402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.014 [2024-11-20 12:44:04.744567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.014 [2024-11-20 12:44:04.744725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.014 [2024-11-20 12:44:04.744733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.014 [2024-11-20 12:44:04.744739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.014 [2024-11-20 12:44:04.744744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.014 [2024-11-20 12:44:04.756623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.014 [2024-11-20 12:44:04.757015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.014 [2024-11-20 12:44:04.757059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.015 [2024-11-20 12:44:04.757081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.015 [2024-11-20 12:44:04.757675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.015 [2024-11-20 12:44:04.758256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.015 [2024-11-20 12:44:04.758288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.015 [2024-11-20 12:44:04.758308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.015 [2024-11-20 12:44:04.758314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.015 [2024-11-20 12:44:04.769385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.015 [2024-11-20 12:44:04.769721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.015 [2024-11-20 12:44:04.769737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.015 [2024-11-20 12:44:04.769743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.015 [2024-11-20 12:44:04.769901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.015 [2024-11-20 12:44:04.770058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.015 [2024-11-20 12:44:04.770066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.015 [2024-11-20 12:44:04.770071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.015 [2024-11-20 12:44:04.770077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.276 [2024-11-20 12:44:04.782024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.276 [2024-11-20 12:44:04.782432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.276 [2024-11-20 12:44:04.782448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.276 [2024-11-20 12:44:04.782455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.276 [2024-11-20 12:44:04.782613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.276 [2024-11-20 12:44:04.782770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.276 [2024-11-20 12:44:04.782777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.276 [2024-11-20 12:44:04.782783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.276 [2024-11-20 12:44:04.782789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.276 [2024-11-20 12:44:04.794743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.276 [2024-11-20 12:44:04.795079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.276 [2024-11-20 12:44:04.795095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.276 [2024-11-20 12:44:04.795102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.276 [2024-11-20 12:44:04.795259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.276 [2024-11-20 12:44:04.795423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.276 [2024-11-20 12:44:04.795431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.276 [2024-11-20 12:44:04.795437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.276 [2024-11-20 12:44:04.795447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.276 [2024-11-20 12:44:04.807486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.276 [2024-11-20 12:44:04.807809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.276 [2024-11-20 12:44:04.807824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.276 [2024-11-20 12:44:04.807831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.276 [2024-11-20 12:44:04.807989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.276 [2024-11-20 12:44:04.808149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.276 [2024-11-20 12:44:04.808157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.276 [2024-11-20 12:44:04.808164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.276 [2024-11-20 12:44:04.808169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.276 [2024-11-20 12:44:04.820232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.276 [2024-11-20 12:44:04.820642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.276 [2024-11-20 12:44:04.820658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.276 [2024-11-20 12:44:04.820665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.276 [2024-11-20 12:44:04.820823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.276 [2024-11-20 12:44:04.820980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.276 [2024-11-20 12:44:04.820987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.276 [2024-11-20 12:44:04.820993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.276 [2024-11-20 12:44:04.820999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1098845 Killed "${NVMF_APP[@]}" "$@"
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:59.277 [2024-11-20 12:44:04.833017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.833338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.833355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.833362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.833523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.833681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.833693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.833699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.833705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1100405
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1100405
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1100405 ']'
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:59.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:59.277 12:44:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:59.277 [2024-11-20 12:44:04.845760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.846110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.846126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.846133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.846291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.846453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.846461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.846467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.846472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 [2024-11-20 12:44:04.858517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.858841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.858857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.858864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.859022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.859181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.859189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.859195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.859200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 [2024-11-20 12:44:04.871252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.871687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.871703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.871710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.871868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.872024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.872032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.872038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.872044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 [2024-11-20 12:44:04.883969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.884274] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
00:29:59.277 [2024-11-20 12:44:04.884312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:59.277 [2024-11-20 12:44:04.884317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.884332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.884339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.884502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.884660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.884667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.884673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.884680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 [2024-11-20 12:44:04.896732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.896981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.896997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.897003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.897162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.897320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.897327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.897334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.897339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 [2024-11-20 12:44:04.909505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.909911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.909927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.909934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.910092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.910249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.910257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.910263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.910269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.277 [2024-11-20 12:44:04.922323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.277 [2024-11-20 12:44:04.922709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.277 [2024-11-20 12:44:04.922725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.277 [2024-11-20 12:44:04.922732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.277 [2024-11-20 12:44:04.922891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.277 [2024-11-20 12:44:04.923049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.277 [2024-11-20 12:44:04.923057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.277 [2024-11-20 12:44:04.923063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.277 [2024-11-20 12:44:04.923069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.278 [2024-11-20 12:44:04.935144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.278 [2024-11-20 12:44:04.935554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.278 [2024-11-20 12:44:04.935570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.278 [2024-11-20 12:44:04.935578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.278 [2024-11-20 12:44:04.935736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.278 [2024-11-20 12:44:04.935894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.278 [2024-11-20 12:44:04.935901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.278 [2024-11-20 12:44:04.935907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.278 [2024-11-20 12:44:04.935913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.278 [2024-11-20 12:44:04.947899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.278 [2024-11-20 12:44:04.948320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.278 [2024-11-20 12:44:04.948336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.278 [2024-11-20 12:44:04.948349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.278 [2024-11-20 12:44:04.948511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.278 [2024-11-20 12:44:04.948669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.278 [2024-11-20 12:44:04.948677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.278 [2024-11-20 12:44:04.948683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.278 [2024-11-20 12:44:04.948688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.278 [2024-11-20 12:44:04.960625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.278 [2024-11-20 12:44:04.961044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.278 [2024-11-20 12:44:04.961059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.278 [2024-11-20 12:44:04.961066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.278 [2024-11-20 12:44:04.961224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.278 [2024-11-20 12:44:04.961382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.278 [2024-11-20 12:44:04.961390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.278 [2024-11-20 12:44:04.961395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.278 [2024-11-20 12:44:04.961401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.278 [2024-11-20 12:44:04.963136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:59.278 [2024-11-20 12:44:04.973337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.278 [2024-11-20 12:44:04.973758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.278 [2024-11-20 12:44:04.973775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.278 [2024-11-20 12:44:04.973783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.278 [2024-11-20 12:44:04.973943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.278 [2024-11-20 12:44:04.974102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.278 [2024-11-20 12:44:04.974110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.278 [2024-11-20 12:44:04.974118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.278 [2024-11-20 12:44:04.974124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.278 [2024-11-20 12:44:04.986057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.278 [2024-11-20 12:44:04.986378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.278 [2024-11-20 12:44:04.986395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.278 [2024-11-20 12:44:04.986403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.278 [2024-11-20 12:44:04.986572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.278 [2024-11-20 12:44:04.986731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.278 [2024-11-20 12:44:04.986739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.278 [2024-11-20 12:44:04.986745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.278 [2024-11-20 12:44:04.986750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.278 [2024-11-20 12:44:04.998822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.278 [2024-11-20 12:44:04.999215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.278 [2024-11-20 12:44:04.999230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.278 [2024-11-20 12:44:04.999237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.278 [2024-11-20 12:44:04.999395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.278 [2024-11-20 12:44:04.999558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.278 [2024-11-20 12:44:04.999566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.278 [2024-11-20 12:44:04.999572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.278 [2024-11-20 12:44:04.999578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.278 [2024-11-20 12:44:05.002658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.278 [2024-11-20 12:44:05.002683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.278 [2024-11-20 12:44:05.002690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.278 [2024-11-20 12:44:05.002695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:59.278 [2024-11-20 12:44:05.002700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.278 [2024-11-20 12:44:05.004072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.278 [2024-11-20 12:44:05.004183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.278 [2024-11-20 12:44:05.004184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.278 [2024-11-20 12:44:05.011590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.278 [2024-11-20 12:44:05.012070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.278 [2024-11-20 12:44:05.012089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.278 [2024-11-20 12:44:05.012098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.278 [2024-11-20 12:44:05.012258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.278 [2024-11-20 12:44:05.012421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.278 [2024-11-20 12:44:05.012429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.278 [2024-11-20 12:44:05.012437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.278 [2024-11-20 12:44:05.012443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.278 [2024-11-20 12:44:05.024345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.278 [2024-11-20 12:44:05.024789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.278 [2024-11-20 12:44:05.024809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.278 [2024-11-20 12:44:05.024817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.278 [2024-11-20 12:44:05.024977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.278 [2024-11-20 12:44:05.025136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.278 [2024-11-20 12:44:05.025145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.278 [2024-11-20 12:44:05.025153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.278 [2024-11-20 12:44:05.025159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.539 [2024-11-20 12:44:05.037199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.539 [2024-11-20 12:44:05.037664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.539 [2024-11-20 12:44:05.037683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.539 [2024-11-20 12:44:05.037692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.539 [2024-11-20 12:44:05.037851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.539 [2024-11-20 12:44:05.038011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.539 [2024-11-20 12:44:05.038019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.539 [2024-11-20 12:44:05.038026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.539 [2024-11-20 12:44:05.038033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.539 [2024-11-20 12:44:05.049921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.539 [2024-11-20 12:44:05.050352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.539 [2024-11-20 12:44:05.050370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.539 [2024-11-20 12:44:05.050378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.539 [2024-11-20 12:44:05.050544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.539 [2024-11-20 12:44:05.050704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.539 [2024-11-20 12:44:05.050712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.539 [2024-11-20 12:44:05.050719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.539 [2024-11-20 12:44:05.050726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.539 [2024-11-20 12:44:05.062760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.539 [2024-11-20 12:44:05.063133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.539 [2024-11-20 12:44:05.063156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.063164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.063323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.063487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.063495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.063502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.063509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.075564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.075970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.075987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.075994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.076164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.076323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.076331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.076337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.076344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.088382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.088716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.088731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.088739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.088897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.089055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.089063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.089069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.089075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.101113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.101448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.101464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.101471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.101633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.101792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.101800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.101806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.101813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.113852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.114246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.114262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.114269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.114432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.114592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.114600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.114607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.114613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.126643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.127062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.127077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.127084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.127242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.127402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.127414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.127421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.127428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.139457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.139848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.139863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.139870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.140028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.140188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.140196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.140206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.140213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.152251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.152643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.152659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.152666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.152824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.152983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.152991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.152997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.153002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.165034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.165432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.165448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.165455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.165613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.165770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.165778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.165784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.165790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.177839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.178258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.540 [2024-11-20 12:44:05.178274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.540 [2024-11-20 12:44:05.178280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.540 [2024-11-20 12:44:05.178444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.540 [2024-11-20 12:44:05.178603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.540 [2024-11-20 12:44:05.178611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.540 [2024-11-20 12:44:05.178617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.540 [2024-11-20 12:44:05.178623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.540 [2024-11-20 12:44:05.190652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.540 [2024-11-20 12:44:05.191066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.541 [2024-11-20 12:44:05.191083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.541 [2024-11-20 12:44:05.191089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.541 [2024-11-20 12:44:05.191247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.541 [2024-11-20 12:44:05.191405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.541 [2024-11-20 12:44:05.191417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.541 [2024-11-20 12:44:05.191423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.541 [2024-11-20 12:44:05.191429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.541 [2024-11-20 12:44:05.203476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.541 [2024-11-20 12:44:05.203877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.541 [2024-11-20 12:44:05.203892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.541 [2024-11-20 12:44:05.203900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.541 [2024-11-20 12:44:05.204057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.541 [2024-11-20 12:44:05.204217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.541 [2024-11-20 12:44:05.204226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.541 [2024-11-20 12:44:05.204232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.541 [2024-11-20 12:44:05.204237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.541 [2024-11-20 12:44:05.216300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.541 [2024-11-20 12:44:05.216723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.541 [2024-11-20 12:44:05.216739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.541 [2024-11-20 12:44:05.216746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.541 [2024-11-20 12:44:05.216903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.541 [2024-11-20 12:44:05.217061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.541 [2024-11-20 12:44:05.217068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.541 [2024-11-20 12:44:05.217074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.541 [2024-11-20 12:44:05.217080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.541 5542.17 IOPS, 21.65 MiB/s [2024-11-20T11:44:05.305Z] [2024-11-20 12:44:05.230142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.541 [2024-11-20 12:44:05.230466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.541 [2024-11-20 12:44:05.230485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.541 [2024-11-20 12:44:05.230492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.541 [2024-11-20 12:44:05.230650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.541 [2024-11-20 12:44:05.230808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.541 [2024-11-20 12:44:05.230816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.541 [2024-11-20 12:44:05.230823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.541 [2024-11-20 12:44:05.230829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.541 [2024-11-20 12:44:05.242879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.541 [2024-11-20 12:44:05.243219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.541 [2024-11-20 12:44:05.243235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.541 [2024-11-20 12:44:05.243242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.541 [2024-11-20 12:44:05.243400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.541 [2024-11-20 12:44:05.243562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.541 [2024-11-20 12:44:05.243571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.541 [2024-11-20 12:44:05.243577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.541 [2024-11-20 12:44:05.243582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.541 [2024-11-20 12:44:05.255639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.541 [2024-11-20 12:44:05.256058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.541 [2024-11-20 12:44:05.256075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:29:59.541 [2024-11-20 12:44:05.256081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:29:59.541 [2024-11-20 12:44:05.256239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:29:59.541 [2024-11-20 12:44:05.256397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.541 [2024-11-20 12:44:05.256404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.541 [2024-11-20 12:44:05.256416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.541 [2024-11-20 12:44:05.256422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.541 [2024-11-20 12:44:05.268472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.541 [2024-11-20 12:44:05.268780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.541 [2024-11-20 12:44:05.268796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.541 [2024-11-20 12:44:05.268802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.541 [2024-11-20 12:44:05.268964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.541 [2024-11-20 12:44:05.269121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.541 [2024-11-20 12:44:05.269129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.541 [2024-11-20 12:44:05.269135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.541 [2024-11-20 12:44:05.269140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.541 [2024-11-20 12:44:05.281203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.541 [2024-11-20 12:44:05.281652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.541 [2024-11-20 12:44:05.281669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.541 [2024-11-20 12:44:05.281676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.541 [2024-11-20 12:44:05.281834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.541 [2024-11-20 12:44:05.281991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.541 [2024-11-20 12:44:05.281999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.541 [2024-11-20 12:44:05.282005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.541 [2024-11-20 12:44:05.282011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.541 [2024-11-20 12:44:05.294058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.541 [2024-11-20 12:44:05.294401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.541 [2024-11-20 12:44:05.294422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.541 [2024-11-20 12:44:05.294429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.541 [2024-11-20 12:44:05.294588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.541 [2024-11-20 12:44:05.294746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.541 [2024-11-20 12:44:05.294754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.541 [2024-11-20 12:44:05.294760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.541 [2024-11-20 12:44:05.294765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.802 [2024-11-20 12:44:05.306801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.802 [2024-11-20 12:44:05.307189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.802 [2024-11-20 12:44:05.307204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.802 [2024-11-20 12:44:05.307211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.802 [2024-11-20 12:44:05.307368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.802 [2024-11-20 12:44:05.307531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.802 [2024-11-20 12:44:05.307543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.802 [2024-11-20 12:44:05.307549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.802 [2024-11-20 12:44:05.307555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.802 [2024-11-20 12:44:05.319601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.802 [2024-11-20 12:44:05.320019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.802 [2024-11-20 12:44:05.320035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.802 [2024-11-20 12:44:05.320042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.802 [2024-11-20 12:44:05.320201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.802 [2024-11-20 12:44:05.320359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.802 [2024-11-20 12:44:05.320367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.802 [2024-11-20 12:44:05.320372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.802 [2024-11-20 12:44:05.320378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.802 [2024-11-20 12:44:05.332357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.802 [2024-11-20 12:44:05.332673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.802 [2024-11-20 12:44:05.332689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.802 [2024-11-20 12:44:05.332696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.802 [2024-11-20 12:44:05.332854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.802 [2024-11-20 12:44:05.333013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.802 [2024-11-20 12:44:05.333021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.802 [2024-11-20 12:44:05.333026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.802 [2024-11-20 12:44:05.333032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.802 [2024-11-20 12:44:05.345068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.802 [2024-11-20 12:44:05.345415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.802 [2024-11-20 12:44:05.345431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.802 [2024-11-20 12:44:05.345438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.802 [2024-11-20 12:44:05.345596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.802 [2024-11-20 12:44:05.345753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.802 [2024-11-20 12:44:05.345761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.802 [2024-11-20 12:44:05.345766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.802 [2024-11-20 12:44:05.345772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.802 [2024-11-20 12:44:05.357833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.802 [2024-11-20 12:44:05.358119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.802 [2024-11-20 12:44:05.358134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.802 [2024-11-20 12:44:05.358141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.802 [2024-11-20 12:44:05.358298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.802 [2024-11-20 12:44:05.358461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.802 [2024-11-20 12:44:05.358469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.802 [2024-11-20 12:44:05.358475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.802 [2024-11-20 12:44:05.358481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.802 [2024-11-20 12:44:05.370682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.802 [2024-11-20 12:44:05.371095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.802 [2024-11-20 12:44:05.371111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.802 [2024-11-20 12:44:05.371118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.802 [2024-11-20 12:44:05.371277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.371439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.371447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.371454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.371460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.383488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.383801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.383816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.383823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.383980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.384137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.384145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.384151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.384156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.396206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.396648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.396668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.396675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.396833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.396994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.397002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.397008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.397014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.408940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.409209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.409224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.409231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.409389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.409554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.409562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.409568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.409574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.421766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.422106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.422123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.422130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.422288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.422450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.422459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.422464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.422470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.434511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.434929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.434945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.434951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.435112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.435270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.435278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.435284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.435290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.447333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.447632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.447649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.447656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.447814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.447973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.447981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.447987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.447992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.460200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.460640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.460656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.460663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.460820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.460977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.460984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.460990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.460996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.473045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.473459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.473476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.473482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.473641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.473799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.473810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.473817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.473822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.485874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.486266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.486282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.486289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.486451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.803 [2024-11-20 12:44:05.486610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.803 [2024-11-20 12:44:05.486618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.803 [2024-11-20 12:44:05.486623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.803 [2024-11-20 12:44:05.486629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.803 [2024-11-20 12:44:05.498673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.803 [2024-11-20 12:44:05.499088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.803 [2024-11-20 12:44:05.499104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.803 [2024-11-20 12:44:05.499111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.803 [2024-11-20 12:44:05.499269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.804 [2024-11-20 12:44:05.499433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.804 [2024-11-20 12:44:05.499441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.804 [2024-11-20 12:44:05.499447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.804 [2024-11-20 12:44:05.499453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.804 [2024-11-20 12:44:05.511498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.804 [2024-11-20 12:44:05.511919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.804 [2024-11-20 12:44:05.511935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.804 [2024-11-20 12:44:05.511941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.804 [2024-11-20 12:44:05.512099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.804 [2024-11-20 12:44:05.512257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.804 [2024-11-20 12:44:05.512265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.804 [2024-11-20 12:44:05.512271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.804 [2024-11-20 12:44:05.512277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.804 [2024-11-20 12:44:05.524321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.804 [2024-11-20 12:44:05.524740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.804 [2024-11-20 12:44:05.524756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.804 [2024-11-20 12:44:05.524763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.804 [2024-11-20 12:44:05.524920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.804 [2024-11-20 12:44:05.525078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.804 [2024-11-20 12:44:05.525086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.804 [2024-11-20 12:44:05.525092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.804 [2024-11-20 12:44:05.525098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.804 [2024-11-20 12:44:05.537139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.804 [2024-11-20 12:44:05.537555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.804 [2024-11-20 12:44:05.537571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.804 [2024-11-20 12:44:05.537578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.804 [2024-11-20 12:44:05.537736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.804 [2024-11-20 12:44:05.537894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.804 [2024-11-20 12:44:05.537902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.804 [2024-11-20 12:44:05.537909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.804 [2024-11-20 12:44:05.537915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.804 [2024-11-20 12:44:05.549979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.804 [2024-11-20 12:44:05.550375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.804 [2024-11-20 12:44:05.550391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:29:59.804 [2024-11-20 12:44:05.550398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:29:59.804 [2024-11-20 12:44:05.550563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:29:59.804 [2024-11-20 12:44:05.550720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.804 [2024-11-20 12:44:05.550728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.804 [2024-11-20 12:44:05.550734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.804 [2024-11-20 12:44:05.550740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.065 [2024-11-20 12:44:05.562784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.065 [2024-11-20 12:44:05.563200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.065 [2024-11-20 12:44:05.563222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420
00:30:00.065 [2024-11-20 12:44:05.563229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set
00:30:00.065 [2024-11-20 12:44:05.563386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor
00:30:00.065 [2024-11-20 12:44:05.563549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.065 [2024-11-20 12:44:05.563557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.065 [2024-11-20 12:44:05.563563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.065 [2024-11-20 12:44:05.563570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.065 [2024-11-20 12:44:05.575624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.065 [2024-11-20 12:44:05.576034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.065 [2024-11-20 12:44:05.576051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.065 [2024-11-20 12:44:05.576058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.065 [2024-11-20 12:44:05.576216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.065 [2024-11-20 12:44:05.576373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.065 [2024-11-20 12:44:05.576382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.065 [2024-11-20 12:44:05.576388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.065 [2024-11-20 12:44:05.576393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.065 [2024-11-20 12:44:05.588448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.065 [2024-11-20 12:44:05.588836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.065 [2024-11-20 12:44:05.588853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.065 [2024-11-20 12:44:05.588859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.065 [2024-11-20 12:44:05.589016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.065 [2024-11-20 12:44:05.589174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.065 [2024-11-20 12:44:05.589182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.065 [2024-11-20 12:44:05.589188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.065 [2024-11-20 12:44:05.589194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.065 [2024-11-20 12:44:05.601193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.065 [2024-11-20 12:44:05.601521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.065 [2024-11-20 12:44:05.601537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.065 [2024-11-20 12:44:05.601543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.065 [2024-11-20 12:44:05.601704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.065 [2024-11-20 12:44:05.601861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.065 [2024-11-20 12:44:05.601869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.065 [2024-11-20 12:44:05.601875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.065 [2024-11-20 12:44:05.601880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.065 [2024-11-20 12:44:05.613927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.065 [2024-11-20 12:44:05.614347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.065 [2024-11-20 12:44:05.614363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.065 [2024-11-20 12:44:05.614371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.065 [2024-11-20 12:44:05.614534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.065 [2024-11-20 12:44:05.614693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.065 [2024-11-20 12:44:05.614701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.065 [2024-11-20 12:44:05.614708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.065 [2024-11-20 12:44:05.614714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.065 [2024-11-20 12:44:05.626753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.065 [2024-11-20 12:44:05.627096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.065 [2024-11-20 12:44:05.627113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.065 [2024-11-20 12:44:05.627120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.065 [2024-11-20 12:44:05.627278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.065 [2024-11-20 12:44:05.627441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.065 [2024-11-20 12:44:05.627451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.065 [2024-11-20 12:44:05.627458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.065 [2024-11-20 12:44:05.627463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.065 [2024-11-20 12:44:05.639544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.065 [2024-11-20 12:44:05.639849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.065 [2024-11-20 12:44:05.639866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.065 [2024-11-20 12:44:05.639873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.065 [2024-11-20 12:44:05.640031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.065 [2024-11-20 12:44:05.640189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.065 [2024-11-20 12:44:05.640202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.065 [2024-11-20 12:44:05.640209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.065 [2024-11-20 12:44:05.640215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.065 [2024-11-20 12:44:05.652275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.065 [2024-11-20 12:44:05.652648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.065 [2024-11-20 12:44:05.652664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.065 [2024-11-20 12:44:05.652671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.065 [2024-11-20 12:44:05.652829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.065 [2024-11-20 12:44:05.652987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.652995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.653002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.653009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 [2024-11-20 12:44:05.665069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.665494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.665510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.665518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.665676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.665835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.665843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.665851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.665858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 [2024-11-20 12:44:05.677924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.678342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.678358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.678365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.678528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.678687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.678695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.678700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.678706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 [2024-11-20 12:44:05.690753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.691136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.691152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.691159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.691317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.691479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.691488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.691495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.691500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 [2024-11-20 12:44:05.703531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.703863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.703879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.703886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.704044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.704203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.704211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.704217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.704222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.066 [2024-11-20 12:44:05.716263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.716599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.716616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.716623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.716781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.716940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.716948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.716954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.716963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 [2024-11-20 12:44:05.729007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.729346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.729362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.729368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.729530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.729689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.729697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.729703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.729709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 [2024-11-20 12:44:05.741752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.742170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.742186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.742193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.742351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.742514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.742523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.742529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.742535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.066 [2024-11-20 12:44:05.752395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.066 [2024-11-20 12:44:05.754570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.754986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.755002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.066 [2024-11-20 12:44:05.755009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.066 [2024-11-20 12:44:05.755166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.066 [2024-11-20 12:44:05.755324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.066 [2024-11-20 12:44:05.755334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.066 [2024-11-20 12:44:05.755341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.066 [2024-11-20 12:44:05.755346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.066 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.066 [2024-11-20 12:44:05.767380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.066 [2024-11-20 12:44:05.767678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.066 [2024-11-20 12:44:05.767694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.067 [2024-11-20 12:44:05.767700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.067 [2024-11-20 12:44:05.767858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.067 [2024-11-20 12:44:05.768016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.067 [2024-11-20 12:44:05.768024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.067 [2024-11-20 12:44:05.768030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.067 [2024-11-20 12:44:05.768035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.067 [2024-11-20 12:44:05.780232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.067 [2024-11-20 12:44:05.780627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.067 [2024-11-20 12:44:05.780643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.067 [2024-11-20 12:44:05.780650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.067 [2024-11-20 12:44:05.780807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.067 [2024-11-20 12:44:05.780964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.067 [2024-11-20 12:44:05.780972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.067 [2024-11-20 12:44:05.780978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.067 [2024-11-20 12:44:05.780984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.067 [2024-11-20 12:44:05.793021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.067 [2024-11-20 12:44:05.793362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.067 [2024-11-20 12:44:05.793378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.067 [2024-11-20 12:44:05.793385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.067 [2024-11-20 12:44:05.793548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.067 [2024-11-20 12:44:05.793711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.067 [2024-11-20 12:44:05.793718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.067 [2024-11-20 12:44:05.793724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.067 [2024-11-20 12:44:05.793730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.067 Malloc0 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.067 [2024-11-20 12:44:05.805761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.067 [2024-11-20 12:44:05.806100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.067 [2024-11-20 12:44:05.806116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.067 [2024-11-20 12:44:05.806123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.067 [2024-11-20 12:44:05.806281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.067 [2024-11-20 12:44:05.806444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.067 [2024-11-20 12:44:05.806452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.067 [2024-11-20 12:44:05.806458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.067 [2024-11-20 12:44:05.806464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.067 [2024-11-20 12:44:05.818500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.067 [2024-11-20 12:44:05.818841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.067 [2024-11-20 12:44:05.818858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2084940 with addr=10.0.0.2, port=4420 00:30:00.067 [2024-11-20 12:44:05.818865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2084940 is same with the state(6) to be set 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.067 [2024-11-20 12:44:05.819023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2084940 (9): Bad file descriptor 00:30:00.067 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.067 [2024-11-20 12:44:05.819180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.067 [2024-11-20 12:44:05.819190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:30:00.067 [2024-11-20 12:44:05.819203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.067 [2024-11-20 12:44:05.819209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:00.067 [2024-11-20 12:44:05.821611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.326 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.326 12:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1099352 00:30:00.326 [2024-11-20 12:44:05.831250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.326 [2024-11-20 12:44:05.858302] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:30:01.540 5409.71 IOPS, 21.13 MiB/s [2024-11-20T11:44:08.281Z] 6305.75 IOPS, 24.63 MiB/s [2024-11-20T11:44:09.659Z] 7013.78 IOPS, 27.40 MiB/s [2024-11-20T11:44:10.597Z] 7565.80 IOPS, 29.55 MiB/s [2024-11-20T11:44:11.533Z] 8015.00 IOPS, 31.31 MiB/s [2024-11-20T11:44:12.470Z] 8390.67 IOPS, 32.78 MiB/s [2024-11-20T11:44:13.407Z] 8737.31 IOPS, 34.13 MiB/s [2024-11-20T11:44:14.342Z] 9010.79 IOPS, 35.20 MiB/s 00:30:08.578 Latency(us) 00:30:08.578 [2024-11-20T11:44:14.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.578 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:08.578 Verification LBA range: start 0x0 length 0x4000 00:30:08.578 Nvme1n1 : 15.00 9251.74 36.14 14382.07 0.00 5398.63 392.84 21209.83 00:30:08.578 [2024-11-20T11:44:14.342Z] =================================================================================================================== 00:30:08.578 [2024-11-20T11:44:14.342Z] Total : 9251.74 36.14 14382.07 0.00 5398.63 
392.84 21209.83 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.837 rmmod nvme_tcp 00:30:08.837 rmmod nvme_fabrics 00:30:08.837 rmmod nvme_keyring 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1100405 ']' 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1100405 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@954 -- # '[' -z 1100405 ']' 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1100405 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100405 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100405' 00:30:08.837 killing process with pid 1100405 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1100405 00:30:08.837 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1100405 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.097 12:44:14 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.097 12:44:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.632 00:30:11.632 real 0m26.904s 00:30:11.632 user 1m3.324s 00:30:11.632 sys 0m6.703s 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:11.632 ************************************ 00:30:11.632 END TEST nvmf_bdevperf 00:30:11.632 ************************************ 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.632 ************************************ 00:30:11.632 START TEST nvmf_target_disconnect 00:30:11.632 ************************************ 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:11.632 * Looking for test storage... 
00:30:11.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:11.632 12:44:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:11.632 12:44:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.632 
--rc genhtml_branch_coverage=1 00:30:11.632 --rc genhtml_function_coverage=1 00:30:11.632 --rc genhtml_legend=1 00:30:11.632 --rc geninfo_all_blocks=1 00:30:11.632 --rc geninfo_unexecuted_blocks=1 00:30:11.632 00:30:11.632 ' 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.632 --rc genhtml_branch_coverage=1 00:30:11.632 --rc genhtml_function_coverage=1 00:30:11.632 --rc genhtml_legend=1 00:30:11.632 --rc geninfo_all_blocks=1 00:30:11.632 --rc geninfo_unexecuted_blocks=1 00:30:11.632 00:30:11.632 ' 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.632 --rc genhtml_branch_coverage=1 00:30:11.632 --rc genhtml_function_coverage=1 00:30:11.632 --rc genhtml_legend=1 00:30:11.632 --rc geninfo_all_blocks=1 00:30:11.632 --rc geninfo_unexecuted_blocks=1 00:30:11.632 00:30:11.632 ' 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.632 --rc genhtml_branch_coverage=1 00:30:11.632 --rc genhtml_function_coverage=1 00:30:11.632 --rc genhtml_legend=1 00:30:11.632 --rc geninfo_all_blocks=1 00:30:11.632 --rc geninfo_unexecuted_blocks=1 00:30:11.632 00:30:11.632 ' 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.632 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.633 12:44:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.633 12:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.204 
12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:30:18.204 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:30:18.204 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:30:18.204 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:30:18.205 Found net devices under 0000:1a:00.0: cvl_0_0 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:30:18.205 Found net devices under 0000:1a:00.1: cvl_0_1 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.205 12:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.205 12:44:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:30:18.205 00:30:18.205 --- 10.0.0.2 ping statistics --- 00:30:18.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.205 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:18.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:30:18.205 00:30:18.205 --- 10.0.0.1 ping statistics --- 00:30:18.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.205 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.205 12:44:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:18.205 ************************************ 00:30:18.205 START TEST nvmf_target_disconnect_tc1 00:30:18.205 ************************************ 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.205 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.206 [2024-11-20 12:44:23.355641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.206 [2024-11-20 12:44:23.355688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1727f00 with 
addr=10.0.0.2, port=4420 00:30:18.206 [2024-11-20 12:44:23.355710] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:18.206 [2024-11-20 12:44:23.355723] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:18.206 [2024-11-20 12:44:23.355730] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:18.206 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:18.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:18.206 Initializing NVMe Controllers 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:18.206 00:30:18.206 real 0m0.121s 00:30:18.206 user 0m0.050s 00:30:18.206 sys 0m0.070s 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:18.206 ************************************ 00:30:18.206 END TEST nvmf_target_disconnect_tc1 00:30:18.206 ************************************ 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:18.206 12:44:23 
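Annotation: tc1 wraps the `reconnect` invocation in the autotest `NOT` helper, so the connection-refused errors above are the *expected* outcome (`es=1`, then `(( !es == 0 ))` passes). A simplified stand-in for that expected-failure pattern, omitting the `valid_exec_arg` type checks the real `autotest_common.sh` performs:

```shell
# Simplified sketch of the autotest "NOT" idiom: succeed only when the
# wrapped command fails. The real helper also validates the executable
# and caps the interesting exit-status range (es > 128 is re-raised).
NOT() {
    local es=0
    "$@" || es=$?
    # Zero exit here means the command failed, which is what we wanted.
    [ "$es" -ne 0 ]
}

NOT false && echo "failure was expected: NOT reports success"
NOT true  || echo "success was unexpected: NOT reports failure"
```

This is why the log can show `spdk_nvme_probe() failed` and `errors occurred` while the test still ends with `END TEST ... nvmf_target_disconnect_tc1` rather than an overall failure.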
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:18.206 ************************************ 00:30:18.206 START TEST nvmf_target_disconnect_tc2 00:30:18.206 ************************************ 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1105801 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1105801 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1105801 ']' 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.206 12:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.206 [2024-11-20 12:44:23.492303] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:30:18.206 [2024-11-20 12:44:23.492345] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.206 [2024-11-20 12:44:23.571561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.206 [2024-11-20 12:44:23.612611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.206 [2024-11-20 12:44:23.612641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.206 [2024-11-20 12:44:23.612649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.206 [2024-11-20 12:44:23.612654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.206 [2024-11-20 12:44:23.612659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:18.206 [2024-11-20 12:44:23.614127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:18.206 [2024-11-20 12:44:23.614241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:18.206 [2024-11-20 12:44:23.614352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:18.206 [2024-11-20 12:44:23.614353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.776 Malloc0 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.776 12:44:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.776 [2024-11-20 12:44:24.386887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.776 12:44:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.776 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.777 [2024-11-20 12:44:24.415635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1106081 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:18.777 12:44:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.682 12:44:26 
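Annotation: the `rpc_cmd` calls traced above configure the freshly started `nvmf_tgt` before the reconnect workload begins. The sketch below replays that sequence with a stub `rpc_cmd` that just echoes, since the real helper forwards each method to SPDK's `rpc.py` over `/var/tmp/spdk.sock` inside the target's namespace; method names and arguments are taken verbatim from the log.

```shell
# Stub so the configuration sequence can be shown without a running nvmf_tgt.
rpc_cmd() { echo "rpc: $*"; }

rpc_cmd bdev_malloc_create 64 512 -b Malloc0                 # backing bdev
rpc_cmd nvmf_create_transport -t tcp -o                      # TCP transport
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listeners are up, the test launches `reconnect` against 10.0.0.2:4420 and then `kill -9`s the target PID (1105801) to provoke the disconnect the following log lines record.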
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1105801 00:30:20.682 12:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read 
completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Write completed with error (sct=0, sc=8) 00:30:20.957 starting I/O failed 00:30:20.957 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 [2024-11-20 12:44:26.443792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 
00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 
00:30:20.958 [2024-11-20 12:44:26.443997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 
starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Read completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 Write completed with error (sct=0, sc=8) 00:30:20.958 starting I/O failed 00:30:20.958 [2024-11-20 12:44:26.444186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.958 [2024-11-20 12:44:26.444315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.958 [2024-11-20 12:44:26.444338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.958 qpair failed and we were unable to recover it. 00:30:20.958 [2024-11-20 12:44:26.444507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.958 [2024-11-20 12:44:26.444517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.958 qpair failed and we were unable to recover it. 
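Annotation: every completion in the flood above reports `sct=0, sc=8`. The helper below maps that pair to a readable NVMe status; the names follow my reading of the NVMe base specification's generic command status table (status code type 0, status code 08h), so treat the labels as an assumption rather than an authoritative decode.

```shell
# Map the (sct, sc) pair printed by the reconnect example to a readable
# status. Only the generic-status values seen in this log are covered.
nvme_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then
        echo "non-generic status type $sct, sc=$sc"
        return
    fi
    case "$sc" in
        0) echo "Successful Completion" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "generic status sc=$sc" ;;
    esac
}

nvme_status 0 8   # the flood above: queues torn down after the target was killed
```

That reading is consistent with the surrounding log: the target was killed with `kill -9`, the submission queues disappeared, and every in-flight read/write was aborted before the host fell into the `connect() failed, errno = 111` retry loop.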
00:30:20.958 [2024-11-20 12:44:26.444671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.958 [2024-11-20 12:44:26.444681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.958 qpair failed and we were unable to recover it. 00:30:20.958 [2024-11-20 12:44:26.444857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.444867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 00:30:20.959 [2024-11-20 12:44:26.445082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 00:30:20.959 [2024-11-20 12:44:26.445232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 00:30:20.959 [2024-11-20 12:44:26.445374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 
00:30:20.959 [2024-11-20 12:44:26.445465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 00:30:20.959 [2024-11-20 12:44:26.445565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 00:30:20.959 [2024-11-20 12:44:26.445648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 00:30:20.959 [2024-11-20 12:44:26.445717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 00:30:20.959 [2024-11-20 12:44:26.445878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.959 [2024-11-20 12:44:26.445889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:20.959 qpair failed and we were unable to recover it. 
00:30:20.959 [2024-11-20 12:44:26.445975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.959 [2024-11-20 12:44:26.445986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:20.959 qpair failed and we were unable to recover it.
00:30:20.959 [the connect() failed / sock connection error / qpair failed triplet above repeats for tqpair=0x7f2b60000b90 from 12:44:26.446042 through 12:44:26.449633]
00:30:20.960 Read completed with error (sct=0, sc=8)
00:30:20.960 starting I/O failed
00:30:20.960 [the "Read/Write completed with error (sct=0, sc=8) / starting I/O failed" pair repeats for the remaining queued I/O]
00:30:20.960 [2024-11-20 12:44:26.449864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.960 [2024-11-20 12:44:26.449923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.961 [2024-11-20 12:44:26.449941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:20.961 qpair failed and we were unable to recover it.
00:30:20.961 [2024-11-20 12:44:26.450087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.961 [2024-11-20 12:44:26.450115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420
00:30:20.961 qpair failed and we were unable to recover it.
00:30:20.961 [the same triplet repeats for tqpair=0x7f2b5c000b90 from 12:44:26.450262 through 12:44:26.453266]
00:30:20.962 [2024-11-20 12:44:26.453343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.962 [2024-11-20 12:44:26.453369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:20.962 qpair failed and we were unable to recover it.
00:30:20.962 [2024-11-20 12:44:26.453502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.962 [2024-11-20 12:44:26.453517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:20.962 qpair failed and we were unable to recover it.
00:30:20.962 [the same triplet repeats for tqpair=0x1504020 from 12:44:26.453591 through 12:44:26.458588]
00:30:20.963 [2024-11-20 12:44:26.458727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.458740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.458810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.458823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.458962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.458976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.459035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.459047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.459161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.459174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 
00:30:20.963 [2024-11-20 12:44:26.459335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.459348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.459425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.459439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.459568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.459581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.459797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.459811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.459923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.459936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 
00:30:20.963 [2024-11-20 12:44:26.460085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.460098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.460169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.460181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.460343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.460357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.460486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.460499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.460668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.460681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 
00:30:20.963 [2024-11-20 12:44:26.460854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.460886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.460994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.461026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.461127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.461158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.461342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.461373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.461567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.461603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 
00:30:20.963 [2024-11-20 12:44:26.461714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.963 [2024-11-20 12:44:26.461744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.963 qpair failed and we were unable to recover it. 00:30:20.963 [2024-11-20 12:44:26.461863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.461894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.462129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.462162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.462400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.462443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.462689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.462704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 
00:30:20.964 [2024-11-20 12:44:26.462780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.462796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.462869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.462883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.463097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.463110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.463239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.463252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.463408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.463425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 
00:30:20.964 [2024-11-20 12:44:26.463497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.463509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.463753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.463766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.463991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.464091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.464271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 
00:30:20.964 [2024-11-20 12:44:26.464378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.464454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.464615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.464695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.464787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 
00:30:20.964 [2024-11-20 12:44:26.464888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.464973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.464985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.465093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.465162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.465230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 
00:30:20.964 [2024-11-20 12:44:26.465322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.465458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.465561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.465683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.465818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 
00:30:20.964 [2024-11-20 12:44:26.465907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.465920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.466060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.466073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.466130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.466143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.964 [2024-11-20 12:44:26.466341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.964 [2024-11-20 12:44:26.466354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.964 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.466514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.466533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 
00:30:20.965 [2024-11-20 12:44:26.466731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.466762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.466871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.466903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.467154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.467187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.467387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.467406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.467677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.467695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 
00:30:20.965 [2024-11-20 12:44:26.467852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.467870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.468112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.468130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.468313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.468344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.468569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.468601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.468728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.468761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 
00:30:20.965 [2024-11-20 12:44:26.468948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.468979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.469170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.469201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.469385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.469446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.469709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.469727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.469823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.469840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 
00:30:20.965 [2024-11-20 12:44:26.469998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.470015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.470105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.470123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.470352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.470370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.470619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.470638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 00:30:20.965 [2024-11-20 12:44:26.470852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.470869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it. 
00:30:20.965 [2024-11-20 12:44:26.471131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.965 [2024-11-20 12:44:26.471149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.965 qpair failed and we were unable to recover it.
[... same connect()/qpair error pair repeated ~114 more times between 12:44:26.471 and 12:44:26.497 (log timestamps 00:30:20.965-00:30:20.969); every attempt targeted tqpair=0x1504020 at 10.0.0.2:4420 and failed with errno = 111, and no qpair could be recovered ...]
00:30:20.969 [2024-11-20 12:44:26.497678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.497710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.497883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.497917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.498089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.498120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.498309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.498340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.498552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.498585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 
00:30:20.969 [2024-11-20 12:44:26.498862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.498893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.499069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.499100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.499372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.499404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.499550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.499581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.499764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.499796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 
00:30:20.969 [2024-11-20 12:44:26.500056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.500088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.500282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.969 [2024-11-20 12:44:26.500314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.969 qpair failed and we were unable to recover it. 00:30:20.969 [2024-11-20 12:44:26.500501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.500534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.500796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.500827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.501103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.501135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 
00:30:20.970 [2024-11-20 12:44:26.501321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.501353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.501558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.501591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.501761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.501793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.501964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.501995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.502240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.502271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 
00:30:20.970 [2024-11-20 12:44:26.502466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.502500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.502666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.502698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.502875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.502910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.503203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.503234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.503424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.503456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 
00:30:20.970 [2024-11-20 12:44:26.503749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.503787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.504015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.504049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.504244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.504278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.504558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.504590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.504844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.504875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 
00:30:20.970 [2024-11-20 12:44:26.505165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.505197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.505390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.505429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.505670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.505701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.505836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.505867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.506001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.506032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 
00:30:20.970 [2024-11-20 12:44:26.506218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.506249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.506426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.506459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.506634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.506668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.970 [2024-11-20 12:44:26.506955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.970 [2024-11-20 12:44:26.506986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.970 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.507239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.507270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 
00:30:20.971 [2024-11-20 12:44:26.507483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.507516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.507786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.507816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.507981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.508013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.508193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.508224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.508465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.508498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 
00:30:20.971 [2024-11-20 12:44:26.508727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.508758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.508938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.508971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.509174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.509208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.509457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.509490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.509726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.509756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 
00:30:20.971 [2024-11-20 12:44:26.509936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.509966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.510203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.510234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.510529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.510567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.510748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.510779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.510966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.510997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 
00:30:20.971 [2024-11-20 12:44:26.511274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.511304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.511470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.511503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.511669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.511699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.511798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.511828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.511945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.511977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 
00:30:20.971 [2024-11-20 12:44:26.512213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.512243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.512451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.512484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.512756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.512787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.513060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.513093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.513303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.513334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 
00:30:20.971 [2024-11-20 12:44:26.513604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.513636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.971 qpair failed and we were unable to recover it. 00:30:20.971 [2024-11-20 12:44:26.513911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.971 [2024-11-20 12:44:26.513943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 00:30:20.972 [2024-11-20 12:44:26.514122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.514154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 00:30:20.972 [2024-11-20 12:44:26.514335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.514365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 00:30:20.972 [2024-11-20 12:44:26.514542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.514574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 
00:30:20.972 [2024-11-20 12:44:26.514890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.514920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 00:30:20.972 [2024-11-20 12:44:26.515066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.515102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 00:30:20.972 [2024-11-20 12:44:26.515280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.515311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 00:30:20.972 [2024-11-20 12:44:26.515563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.515597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 00:30:20.972 [2024-11-20 12:44:26.515764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.972 [2024-11-20 12:44:26.515796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.972 qpair failed and we were unable to recover it. 
00:30:20.972 [2024-11-20 12:44:26.516035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.972 [2024-11-20 12:44:26.516066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:20.972 qpair failed and we were unable to recover it.
[... the same three-record failure pattern — connect() failed with errno = 111, sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 12:44:26.516 through 12:44:26.543 ...]
00:30:20.976 [2024-11-20 12:44:26.544194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.544226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.544467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.544499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.544753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.544785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.545036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.545067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.545248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.545280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 
00:30:20.976 [2024-11-20 12:44:26.545521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.545555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.545790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.545823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.546010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.546042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.546170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.546202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.546465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.546498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 
00:30:20.976 [2024-11-20 12:44:26.546666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.546698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.546894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.546926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.547235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.547266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.547440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.547474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.547609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.547640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 
00:30:20.976 [2024-11-20 12:44:26.547826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.547858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.548049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.548082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.548300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.548332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.976 qpair failed and we were unable to recover it. 00:30:20.976 [2024-11-20 12:44:26.548532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.976 [2024-11-20 12:44:26.548567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.548740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.548771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 
00:30:20.977 [2024-11-20 12:44:26.549008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.549039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.549291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.549323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.549540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.549572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.549835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.549866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.550109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.550141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 
00:30:20.977 [2024-11-20 12:44:26.550420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.550453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.550689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.550720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.551025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.551057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.551312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.551343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.551545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.551578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 
00:30:20.977 [2024-11-20 12:44:26.551839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.551869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.552129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.552160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.552343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.552375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.552517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.552549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.552812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.552843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 
00:30:20.977 [2024-11-20 12:44:26.553131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.553162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.553274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.553306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.553476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.553508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.553694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.553725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.553966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.553998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 
00:30:20.977 [2024-11-20 12:44:26.554183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.554214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.554334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.554368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.554516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.554548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.554728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.554760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.555018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.555050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 
00:30:20.977 [2024-11-20 12:44:26.555242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.555274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.555460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.555494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.555681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.555712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.555971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.556003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.977 [2024-11-20 12:44:26.556200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.556231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 
00:30:20.977 [2024-11-20 12:44:26.556468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.977 [2024-11-20 12:44:26.556501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.977 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.556777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.556808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.557074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.557111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.557396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.557437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.557629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.557660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 
00:30:20.978 [2024-11-20 12:44:26.557832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.557866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.558040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.558074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.558336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.558366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.558648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.558680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.558963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.558994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 
00:30:20.978 [2024-11-20 12:44:26.559179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.559211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.559431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.559464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.559701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.559733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.559996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.560027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.560317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.560347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 
00:30:20.978 [2024-11-20 12:44:26.560520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.560552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.560732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.560764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.560932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.560963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.561162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.561193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.561434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.561466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 
00:30:20.978 [2024-11-20 12:44:26.561671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.561705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.561902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.561933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.562224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.562255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.562448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.562481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.562742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.562774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 
00:30:20.978 [2024-11-20 12:44:26.563006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.563038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.563251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.563282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.563538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.563571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.563879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.563910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 00:30:20.978 [2024-11-20 12:44:26.564116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.978 [2024-11-20 12:44:26.564155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.978 qpair failed and we were unable to recover it. 
00:30:20.980 [2024-11-20 12:44:26.574925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.980 [2024-11-20 12:44:26.574993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:20.980 qpair failed and we were unable to recover it.
00:30:20.980 [2024-11-20 12:44:26.575307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.980 [2024-11-20 12:44:26.575342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:20.980 qpair failed and we were unable to recover it.
00:30:20.980 [2024-11-20 12:44:26.575517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.980 [2024-11-20 12:44:26.575552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:20.980 qpair failed and we were unable to recover it.
00:30:20.980 [2024-11-20 12:44:26.575723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.980 [2024-11-20 12:44:26.575754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:20.980 qpair failed and we were unable to recover it.
00:30:20.980 [2024-11-20 12:44:26.575958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.980 [2024-11-20 12:44:26.575992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:20.980 qpair failed and we were unable to recover it.
00:30:20.981 [2024-11-20 12:44:26.585680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.981 [2024-11-20 12:44:26.585747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:20.981 qpair failed and we were unable to recover it.
00:30:20.981 [2024-11-20 12:44:26.586038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.981 [2024-11-20 12:44:26.586074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:20.981 qpair failed and we were unable to recover it.
00:30:20.981 [2024-11-20 12:44:26.586363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.981 [2024-11-20 12:44:26.586396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:20.981 qpair failed and we were unable to recover it.
00:30:20.981 [2024-11-20 12:44:26.586621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.981 [2024-11-20 12:44:26.586653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:20.981 qpair failed and we were unable to recover it.
00:30:20.981 [2024-11-20 12:44:26.586921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.981 [2024-11-20 12:44:26.586954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:20.981 qpair failed and we were unable to recover it.
00:30:20.982 [2024-11-20 12:44:26.593817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.593849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.594111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.594143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.594324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.594355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.594677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.594710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.594893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.594927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 
00:30:20.982 [2024-11-20 12:44:26.595111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.595143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.595398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.595448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.595740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.595771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.596026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.596058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.596310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.596342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 
00:30:20.982 [2024-11-20 12:44:26.596635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.596674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.596964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.596996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.597268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.597300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.597572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.597605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.597861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.597892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 
00:30:20.982 [2024-11-20 12:44:26.598068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.982 [2024-11-20 12:44:26.598100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.982 qpair failed and we were unable to recover it. 00:30:20.982 [2024-11-20 12:44:26.598311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.598342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.598605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.598638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.598822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.598854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.599118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.599149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 
00:30:20.983 [2024-11-20 12:44:26.599423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.599476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.599742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.599774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.599959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.599995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.600199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.600232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.600478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.600511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 
00:30:20.983 [2024-11-20 12:44:26.600679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.600711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.600890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.600922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.601160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.601191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.601462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.601495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.601666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.601698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 
00:30:20.983 [2024-11-20 12:44:26.601932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.601963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.602230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.602262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.602558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.602592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.602856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.602887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.603104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.603135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 
00:30:20.983 [2024-11-20 12:44:26.603406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.603453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.603692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.603723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.603921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.603953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.604050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.604082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.604286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.604319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 
00:30:20.983 [2024-11-20 12:44:26.604505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.604539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.604802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.604833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.605087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.605118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.605379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.605410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.605602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.605634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 
00:30:20.983 [2024-11-20 12:44:26.605824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.605855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.983 qpair failed and we were unable to recover it. 00:30:20.983 [2024-11-20 12:44:26.606121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.983 [2024-11-20 12:44:26.606158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.606400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.606441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.606691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.606722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.606835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.606867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 
00:30:20.984 [2024-11-20 12:44:26.607110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.607147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.607453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.607487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.607715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.607746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.607928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.607960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.608217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.608248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 
00:30:20.984 [2024-11-20 12:44:26.608520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.608554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.608661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.608692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.608875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.608906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.609018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.609049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.609216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.609248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 
00:30:20.984 [2024-11-20 12:44:26.609445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.609477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.609752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.609783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.609980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.610012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.610196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.610228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.610500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.610533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 
00:30:20.984 [2024-11-20 12:44:26.610714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.610746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.610981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.611013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.611130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.611161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.611453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.611486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.611754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.611785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 
00:30:20.984 [2024-11-20 12:44:26.611983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.612016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.612264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.612296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.612419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.612452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.612652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.612683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.612861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.612893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 
00:30:20.984 [2024-11-20 12:44:26.612994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.984 [2024-11-20 12:44:26.613024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.984 qpair failed and we were unable to recover it. 00:30:20.984 [2024-11-20 12:44:26.613194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.985 [2024-11-20 12:44:26.613226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.985 qpair failed and we were unable to recover it. 00:30:20.985 [2024-11-20 12:44:26.613428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.985 [2024-11-20 12:44:26.613460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.985 qpair failed and we were unable to recover it. 00:30:20.985 [2024-11-20 12:44:26.613728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.985 [2024-11-20 12:44:26.613760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.985 qpair failed and we were unable to recover it. 00:30:20.985 [2024-11-20 12:44:26.613964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.985 [2024-11-20 12:44:26.613995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.985 qpair failed and we were unable to recover it. 
00:30:20.987 [2024-11-20 12:44:26.629814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.987 [2024-11-20 12:44:26.629846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.987 qpair failed and we were unable to recover it. 00:30:20.987 [2024-11-20 12:44:26.630031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.987 [2024-11-20 12:44:26.630063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:20.987 qpair failed and we were unable to recover it. 00:30:20.987 [2024-11-20 12:44:26.630285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.987 [2024-11-20 12:44:26.630342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.987 qpair failed and we were unable to recover it. 00:30:20.987 [2024-11-20 12:44:26.630553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.987 [2024-11-20 12:44:26.630590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.987 qpair failed and we were unable to recover it. 00:30:20.987 [2024-11-20 12:44:26.630896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.987 [2024-11-20 12:44:26.630930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.987 qpair failed and we were unable to recover it. 
00:30:20.988 [2024-11-20 12:44:26.636120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.636152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.636348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.636379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.636545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511f60 is same with the state(6) to be set 00:30:20.988 [2024-11-20 12:44:26.636855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.636924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.637173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.637209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.637505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.637540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 
00:30:20.988 [2024-11-20 12:44:26.642537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.642569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.642755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.642786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.643000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.643032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.643300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.643331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.643500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.643532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 
00:30:20.988 [2024-11-20 12:44:26.643709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.643742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.988 qpair failed and we were unable to recover it. 00:30:20.988 [2024-11-20 12:44:26.643928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.988 [2024-11-20 12:44:26.643960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.644231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.644263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.644534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.644566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.644806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.644838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 
00:30:20.989 [2024-11-20 12:44:26.645108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.645140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.645316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.645349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.645621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.645655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.645925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.645957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.646166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.646201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 
00:30:20.989 [2024-11-20 12:44:26.646432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.646464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.646756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.646788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.647047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.647079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.647287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.647318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.647567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.647601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 
00:30:20.989 [2024-11-20 12:44:26.647818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.647856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.648093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.648126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.648363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.648395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.648666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.648698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.648986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.649018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 
00:30:20.989 [2024-11-20 12:44:26.649184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.649215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.649485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.649519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.649792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.649824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.650104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.650135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.650382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.650422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 
00:30:20.989 [2024-11-20 12:44:26.650649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.650680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.650896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.650927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.651098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.651130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.651428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.651462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.651734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.651767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 
00:30:20.989 [2024-11-20 12:44:26.652039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.652070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.652249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.652280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.652460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.652496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.652764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.652795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.989 [2024-11-20 12:44:26.652999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.653031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 
00:30:20.989 [2024-11-20 12:44:26.653283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.989 [2024-11-20 12:44:26.653315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.989 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.653575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.653608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.653894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.653925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.654118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.654150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.654422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.654454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 
00:30:20.990 [2024-11-20 12:44:26.654740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.654772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.654938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.654971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.655246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.655278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.655561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.655593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.655774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.655805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 
00:30:20.990 [2024-11-20 12:44:26.656076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.656108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.656289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.656321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.656560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.656593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.656795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.656827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.656941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.656973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 
00:30:20.990 [2024-11-20 12:44:26.657236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.657268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.657539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.657573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.657860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.657891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.658085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.658118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.658357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.658389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 
00:30:20.990 [2024-11-20 12:44:26.658638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.658677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.658968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.659000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.659262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.659293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.659573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.659605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.659876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.659907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 
00:30:20.990 [2024-11-20 12:44:26.660089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.660121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.660306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.660337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.660591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.660623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.660814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.660846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 00:30:20.990 [2024-11-20 12:44:26.661032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.990 [2024-11-20 12:44:26.661063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.990 qpair failed and we were unable to recover it. 
00:30:20.990 [2024-11-20 12:44:26.661247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.991 [2024-11-20 12:44:26.661279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.991 qpair failed and we were unable to recover it. 00:30:20.991 [2024-11-20 12:44:26.661387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.991 [2024-11-20 12:44:26.661429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.991 qpair failed and we were unable to recover it. 00:30:20.991 [2024-11-20 12:44:26.661615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.991 [2024-11-20 12:44:26.661646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.991 qpair failed and we were unable to recover it. 00:30:20.991 [2024-11-20 12:44:26.661907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.991 [2024-11-20 12:44:26.661938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.991 qpair failed and we were unable to recover it. 00:30:20.991 [2024-11-20 12:44:26.662132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.991 [2024-11-20 12:44:26.662163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.991 qpair failed and we were unable to recover it. 
00:30:20.991 [2024-11-20 12:44:26.662375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.991 [2024-11-20 12:44:26.662408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.991 qpair failed and we were unable to recover it. 
00:30:20.995 [2024-11-20 12:44:26.691769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.691802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.692054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.692085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.692262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.692294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.692474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.692506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.692829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.692862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 
00:30:20.995 [2024-11-20 12:44:26.693002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.693034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.693299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.693331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.693453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.693487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.693604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.693635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.693893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.693925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 
00:30:20.995 [2024-11-20 12:44:26.694118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.694149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.694337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.694368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.694574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.694605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.694920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.694953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.695074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.695105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 
00:30:20.995 [2024-11-20 12:44:26.695373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.695404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.695529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.695561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.695746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.695777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.695913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.695944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.696133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.696165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 
00:30:20.995 [2024-11-20 12:44:26.696286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.696316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.696499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.696532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.696781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.696812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.696937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.696968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.697263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.697294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 
00:30:20.995 [2024-11-20 12:44:26.697563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.697596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.697711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.697743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.697917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.697948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.698072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.698104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.698363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.698395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 
00:30:20.995 [2024-11-20 12:44:26.698691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.698730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.995 qpair failed and we were unable to recover it. 00:30:20.995 [2024-11-20 12:44:26.698969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.995 [2024-11-20 12:44:26.699001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.699234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.699265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.699449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.699482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.699668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.699699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 
00:30:20.996 [2024-11-20 12:44:26.699968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.699999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.700120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.700152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.700395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.700446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.700659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.700691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.700933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.700964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 
00:30:20.996 [2024-11-20 12:44:26.701265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.701296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:20.996 [2024-11-20 12:44:26.701588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.996 [2024-11-20 12:44:26.701621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:20.996 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.701892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.701923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.702044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.702080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.702431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.702465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 
00:30:21.274 [2024-11-20 12:44:26.702780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.702812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.702997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.703029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.703220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.703252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.703458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.703490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.703693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.703725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 
00:30:21.274 [2024-11-20 12:44:26.703989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.704021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.704120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.704152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.704327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.704359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.704593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.704627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.704866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.704897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 
00:30:21.274 [2024-11-20 12:44:26.705073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.705105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.705210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.705242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.705437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.705470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.705661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.705692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.705969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.706000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 
00:30:21.274 [2024-11-20 12:44:26.706238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.706270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.706536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.706568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.706817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.706849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.707019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.707051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.707237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.707270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 
00:30:21.274 [2024-11-20 12:44:26.707550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.707583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.707704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.707736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.707997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.708029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.708131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.708165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 00:30:21.274 [2024-11-20 12:44:26.708438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.708470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.274 qpair failed and we were unable to recover it. 
00:30:21.274 [2024-11-20 12:44:26.708764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.274 [2024-11-20 12:44:26.708802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.708991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.709022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.709317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.709349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.709468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.709501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.709774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.709805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 
00:30:21.275 [2024-11-20 12:44:26.710012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.710043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.710236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.710268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.710526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.710558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.710812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.710844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.711080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.711112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 
00:30:21.275 [2024-11-20 12:44:26.711376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.711408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.711550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.711581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.711797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.711828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.712082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.712114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.712369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.712400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 
00:30:21.275 [2024-11-20 12:44:26.712661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.712693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.712876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.712908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.713167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.713199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.713491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.713524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.713709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.713740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 
00:30:21.275 [2024-11-20 12:44:26.714002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.714034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.714204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.714236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.714420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.714453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.714620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.714652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.714825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.714856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 
00:30:21.275 [2024-11-20 12:44:26.715141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.715172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.715433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.715465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.715686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.715755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.716063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.716099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.716364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.716396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 
00:30:21.275 [2024-11-20 12:44:26.716623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.716655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.716780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.716812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.717022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.717054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.717250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.717281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.717464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.717496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 
00:30:21.275 [2024-11-20 12:44:26.717676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.717706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.717808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.275 [2024-11-20 12:44:26.717839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.275 qpair failed and we were unable to recover it. 00:30:21.275 [2024-11-20 12:44:26.718026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.718058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.718350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.718382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.718688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.718726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.276 [2024-11-20 12:44:26.719025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.719067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.719352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.719383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.719583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.719616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.719828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.719859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.719980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.720012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.276 [2024-11-20 12:44:26.720191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.720223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.720397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.720436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.720558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.720590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.720854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.720886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.721073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.721105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.276 [2024-11-20 12:44:26.721370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.721403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.721695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.721728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.721999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.722030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.722227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.722258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.722467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.722503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.276 [2024-11-20 12:44:26.722773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.722805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.722987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.723019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.723213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.723245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.723526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.723558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.723742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.723774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.276 [2024-11-20 12:44:26.723951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.723982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.724159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.724190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.724373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.724405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.724702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.724734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.724913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.724944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.276 [2024-11-20 12:44:26.725210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.725241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.725551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.725584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.725830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.725863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.726048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.726079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.726255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.726289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.276 [2024-11-20 12:44:26.726395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.726440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.726703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.726734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.726910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.726941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.727128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.727160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 00:30:21.276 [2024-11-20 12:44:26.727432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.276 [2024-11-20 12:44:26.727464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.276 qpair failed and we were unable to recover it. 
00:30:21.277 [2024-11-20 12:44:26.727565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.727596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.727786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.727817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.727937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.727969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.728133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.728164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.728368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.728399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 
00:30:21.277 [2024-11-20 12:44:26.728545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.728582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.728842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.728875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.728989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.729020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.729231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.729263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.729444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.729478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 
00:30:21.277 [2024-11-20 12:44:26.729655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.729685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.729966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.729997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.730286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.730318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.730585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.730617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.730914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.730946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 
00:30:21.277 [2024-11-20 12:44:26.731163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.731195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.731407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.731448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.731627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.731658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.731770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.731802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.732036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.732068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 
00:30:21.277 [2024-11-20 12:44:26.732258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.732289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.732492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.732524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.732732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.732763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.732981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.733014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 00:30:21.277 [2024-11-20 12:44:26.733205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.733237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it. 
00:30:21.277 [2024-11-20 12:44:26.733494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.277 [2024-11-20 12:44:26.733526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.277 qpair failed and we were unable to recover it.
00:30:21.279 [2024-11-20 12:44:26.745462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.279 [2024-11-20 12:44:26.745505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.279 qpair failed and we were unable to recover it.
00:30:21.280 [2024-11-20 12:44:26.755339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.280 [2024-11-20 12:44:26.755408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.280 qpair failed and we were unable to recover it.
00:30:21.280 [2024-11-20 12:44:26.761815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.280 [2024-11-20 12:44:26.761846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.280 qpair failed and we were unable to recover it. 00:30:21.280 [2024-11-20 12:44:26.762112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.280 [2024-11-20 12:44:26.762143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.280 qpair failed and we were unable to recover it. 00:30:21.280 [2024-11-20 12:44:26.762432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.280 [2024-11-20 12:44:26.762465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.280 qpair failed and we were unable to recover it. 00:30:21.280 [2024-11-20 12:44:26.762740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.280 [2024-11-20 12:44:26.762772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.280 qpair failed and we were unable to recover it. 00:30:21.280 [2024-11-20 12:44:26.762882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.280 [2024-11-20 12:44:26.762914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.763109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.763141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.763331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.763362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.763593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.763625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.763838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.763869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.764135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.764166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.764377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.764407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.764591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.764622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.764812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.764852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.765100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.765148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.765390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.765433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.765613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.765645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.765826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.765858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.766030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.766062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.766323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.766354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.766626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.766659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.766948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.766980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.767166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.767197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.767374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.767405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.767657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.767688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.767902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.767933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.768111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.768142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.768305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.768336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.768438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.768470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.768663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.768695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.768884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.768918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.769157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.769189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.769435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.769468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.769605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.769637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.769904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.769936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.770105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.770135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.770336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.770367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.770638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.770671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.770930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.770962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.771155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.771187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.771404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.771444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-20 12:44:26.771641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.771672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.771857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.281 [2024-11-20 12:44:26.771889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.281 qpair failed and we were unable to recover it. 00:30:21.281 [2024-11-20 12:44:26.772129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.772160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.772343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.772374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.772626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.772659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-20 12:44:26.772840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.772871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.773144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.773175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.773295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.773326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.773599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.773632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.773876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.773908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-20 12:44:26.774027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.774058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.774183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.774213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.774350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.774388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.774508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.774542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.774781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.774813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-20 12:44:26.774940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.774971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.775207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.775238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.775423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.775455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.775640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.775672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.775789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.775820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-20 12:44:26.775989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.776020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.776289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.776321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.776501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.776532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.776716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.776746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.776866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.776897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-20 12:44:26.777072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.777106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.777321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.777353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.777546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.777579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.777838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.777869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.778104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.778136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-20 12:44:26.778374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.778406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.778643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.778675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.778921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.778952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.779245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.779276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 00:30:21.282 [2024-11-20 12:44:26.779567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.779601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-20 12:44:26.779868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.282 [2024-11-20 12:44:26.779900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.282 qpair failed and we were unable to recover it. 
[... identical sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for the same tqpair from 12:44:26.779868 through 12:44:26.806902 ...]
00:30:21.286 [2024-11-20 12:44:26.806872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.806902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.807138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.807171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.807487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.807521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.807797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.807828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.808065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.808096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.808263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.808295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.808567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.808600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.808714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.808745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.808916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.808947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.809145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.809176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.809281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.809314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.809551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.809584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.809769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.809800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.809986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.810018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.810272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.810303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.810474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.810506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.810762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.810794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.811086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.811117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.811303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.811335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.811461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.811494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.811787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.811818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.811941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.811977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.812240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.812272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.812374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.812422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.812597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.812628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.812816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.812848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.813063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.813095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.813212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.813242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.813436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.813468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.813655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.813685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.813921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.813952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.814204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.814235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.814452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.814485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.814684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.814717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.815005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.815038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.815349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.815380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 
00:30:21.286 [2024-11-20 12:44:26.815579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.286 [2024-11-20 12:44:26.815612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.286 qpair failed and we were unable to recover it. 00:30:21.286 [2024-11-20 12:44:26.815811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.815843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.815960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.815993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.816166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.816196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.816384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.816421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 
00:30:21.287 [2024-11-20 12:44:26.816668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.816700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.816952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.816984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.817235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.817267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.817461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.817494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.817706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.817738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 
00:30:21.287 [2024-11-20 12:44:26.817915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.817947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.818164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.818195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.818390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.818429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.818612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.818644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.818957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.818990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 
00:30:21.287 [2024-11-20 12:44:26.819245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.819277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.819545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.819577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.819782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.819814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.819991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.820022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.820297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.820331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 
00:30:21.287 [2024-11-20 12:44:26.820503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.820536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.820744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.820776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.820962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.820994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.821215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.821246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.821430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.821462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 
00:30:21.287 [2024-11-20 12:44:26.821597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.821628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.821812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.821843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.822015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.822054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.822179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.822209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.822410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.822454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 
00:30:21.287 [2024-11-20 12:44:26.822691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.822722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.822835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.822865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.823156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.823187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.823403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.823444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.823702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.823735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 
00:30:21.287 [2024-11-20 12:44:26.823930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.823962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.824139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.824170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.824344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.287 [2024-11-20 12:44:26.824375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.287 qpair failed and we were unable to recover it. 00:30:21.287 [2024-11-20 12:44:26.824679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.288 [2024-11-20 12:44:26.824712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.288 qpair failed and we were unable to recover it. 00:30:21.288 [2024-11-20 12:44:26.824920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.288 [2024-11-20 12:44:26.824951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.288 qpair failed and we were unable to recover it. 
00:30:21.288 [2024-11-20 12:44:26.825069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.288 [2024-11-20 12:44:26.825100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.288 qpair failed and we were unable to recover it.
00:30:21.289 [2024-11-20 12:44:26.839568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.289 [2024-11-20 12:44:26.839637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420
00:30:21.289 qpair failed and we were unable to recover it.
... (this connect() errno = 111 / qpair-failure triplet repeats 115 times between [2024-11-20 12:44:26.825069] and [2024-11-20 12:44:26.852707]: 61x for tqpair=0x7f2b60000b90, then 33x for tqpair=0x7f2b5c000b90, then 21x more for tqpair=0x7f2b60000b90; all against addr=10.0.0.2, port=4420) ...
00:30:21.291 [2024-11-20 12:44:26.852891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.852921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.853158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.853190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.853363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.853396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.853607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.853638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.853839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.853871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 
00:30:21.291 [2024-11-20 12:44:26.854134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.854166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.854351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.854384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.854636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.854668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.854963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.854995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.855104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.855135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 
00:30:21.291 [2024-11-20 12:44:26.855334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.855366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.855586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.855620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.855912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.855943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.856181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.856213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.856399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.856444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 
00:30:21.291 [2024-11-20 12:44:26.856571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.856602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.856865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.856896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.857081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.857113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.857321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.857352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.857616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.857650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 
00:30:21.291 [2024-11-20 12:44:26.857867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.857898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.858109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.858141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.858320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.858351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.858624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.858657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.858896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.858927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 
00:30:21.291 [2024-11-20 12:44:26.859167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.859199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.859459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.859492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.859660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.859692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.859810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.859841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.860085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.860117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 
00:30:21.291 [2024-11-20 12:44:26.860226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.291 [2024-11-20 12:44:26.860258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.291 qpair failed and we were unable to recover it. 00:30:21.291 [2024-11-20 12:44:26.860444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.860477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.860739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.860776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.860957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.860992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.861257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.861288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 
00:30:21.292 [2024-11-20 12:44:26.861478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.861511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.861750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.861782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.861978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.862008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.862247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.862279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.862465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.862499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 
00:30:21.292 [2024-11-20 12:44:26.862711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.862742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.862960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.862992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.863170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.863202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.863475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.863507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.863704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.863735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 
00:30:21.292 [2024-11-20 12:44:26.863989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.864021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.864292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.864324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.864525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.864557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.864732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.864764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.864959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.864991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 
00:30:21.292 [2024-11-20 12:44:26.865185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.865217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.865409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.865466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.865710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.865743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.866004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.866035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.866137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.866169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 
00:30:21.292 [2024-11-20 12:44:26.866346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.866378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.866554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.866588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.866808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.866839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.867032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.867063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.867291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.867363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 
00:30:21.292 [2024-11-20 12:44:26.867544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.867586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.867769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.867811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.867924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.867956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.868236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.292 [2024-11-20 12:44:26.868268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.292 qpair failed and we were unable to recover it. 00:30:21.292 [2024-11-20 12:44:26.868547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.868583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 
00:30:21.293 [2024-11-20 12:44:26.868867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.868901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.869173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.869205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.869422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.869456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.869646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.869682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.869872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.869905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 
00:30:21.293 [2024-11-20 12:44:26.870171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.870204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.870375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.870408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.870638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.870684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.870874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.870906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 00:30:21.293 [2024-11-20 12:44:26.871177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.293 [2024-11-20 12:44:26.871209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.293 qpair failed and we were unable to recover it. 
00:30:21.293 [2024-11-20 12:44:26.871467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.871508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.871752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.871787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.872081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.872115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.872397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.872444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.872707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.872741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.873008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.873041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.873228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.873259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.873466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.873498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.873789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.873831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.874093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.874128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.874304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.874337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.874555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.874592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.874839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.874870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.875109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.875141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.875321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.875353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.875550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.875583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.875800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.875832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.876093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.876125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.876365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.876397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.876513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.876546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.876741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.876774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.876983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.877014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.877185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.877216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.877486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.877518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.877714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.877746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.877935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.877967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.293 [2024-11-20 12:44:26.878176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.293 [2024-11-20 12:44:26.878208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.293 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.878464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.878496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.878770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.878809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.879048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.879079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.879300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.879332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.879594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.879626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.879888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.879919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.880161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.880193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.880458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.880491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.880638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.880671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.880857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.880889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.881180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.881218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.881494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.881527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.881729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.881761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.882000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.882032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.882295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.882327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.882521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.882560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.882732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.882764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.882980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.883010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.883187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.883219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.883431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.883464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.883703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.883736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.884028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.884059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.884298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.884331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.884596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.884629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.884826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.884857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.885038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.885070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.885261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.885294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.885486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.885520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.885709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.885741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.885847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.885878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.886059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.886090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.886343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.886380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.886581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.886616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.886871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.886905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.887074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.887106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.887400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.887451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.887584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.887616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.294 qpair failed and we were unable to recover it.
00:30:21.294 [2024-11-20 12:44:26.887798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.294 [2024-11-20 12:44:26.887840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.888052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.888083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.888308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.888340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.888580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.888615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.888787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.888822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.889026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.889061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.889231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.889262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.889530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.889570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.889814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.889852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.889971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.890003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.890251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.890285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.890574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.890606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.890881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.890915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.891199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.891233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.891455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.891489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.891669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.891701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.891968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.891999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.892205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.892238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.892486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.892523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.892798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.892832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.893016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.893047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.893309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.893342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.893637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.893676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.893944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.893978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.894160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.894192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.894369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.894409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.894631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.894668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.894797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.894829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.895000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.895031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.895277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.895309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.895444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.895483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.895665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.895699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.895886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.895921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.896110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.896142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.896424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.896458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.896734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.896768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.896963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.896995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.897251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.295 [2024-11-20 12:44:26.897285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.295 qpair failed and we were unable to recover it.
00:30:21.295 [2024-11-20 12:44:26.897461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.897495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.897681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.897715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.897827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.897865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.898152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.898185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.898398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.898441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.898608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.898639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.898827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.898860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.899058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.899088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.899266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.899300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.899498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.899531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.899714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.899746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.899918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.899950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.900241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.296 [2024-11-20 12:44:26.900279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.296 qpair failed and we were unable to recover it.
00:30:21.296 [2024-11-20 12:44:26.900537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.900571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.900782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.900815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.900937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.900968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.901224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.901257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.901427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.901460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 
00:30:21.296 [2024-11-20 12:44:26.901760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.901797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.901973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.902004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.902236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.902267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.902456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.902488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.902738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.902770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 
00:30:21.296 [2024-11-20 12:44:26.903062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.903095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.903369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.903401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.903679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.903711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.903936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.903968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.904253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.904285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 
00:30:21.296 [2024-11-20 12:44:26.904494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.904528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.904809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.904842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.905029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.905061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.905302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.905335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.905606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.905639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 
00:30:21.296 [2024-11-20 12:44:26.905805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.905837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.905963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.905995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.906265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.906296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.906573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.906606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 00:30:21.296 [2024-11-20 12:44:26.906818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.296 [2024-11-20 12:44:26.906851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.296 qpair failed and we were unable to recover it. 
00:30:21.297 [2024-11-20 12:44:26.907029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.907061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.907228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.907261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.907469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.907501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.907752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.907785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.907887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.907925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 
00:30:21.297 [2024-11-20 12:44:26.908104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.908139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.908348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.908380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.908560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.908592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.908855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.908887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.909165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.909196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 
00:30:21.297 [2024-11-20 12:44:26.909323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.909360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.909546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.909578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.909816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.909847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.910035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.910065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.910200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.910230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 
00:30:21.297 [2024-11-20 12:44:26.910406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.910458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.910739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.910772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.911023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.911059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.911312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.911346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.911545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.911579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 
00:30:21.297 [2024-11-20 12:44:26.911829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.911861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.912047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.912082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.912201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.912234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.912333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.912375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.912571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.912602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 
00:30:21.297 [2024-11-20 12:44:26.912789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.912824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.913109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.913142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.913332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.913364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.913615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.913649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.913846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.913881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 
00:30:21.297 [2024-11-20 12:44:26.914004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.914034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.914290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.914322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.914612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.297 [2024-11-20 12:44:26.914645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.297 qpair failed and we were unable to recover it. 00:30:21.297 [2024-11-20 12:44:26.914838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.914873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.915051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.915086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 
00:30:21.298 [2024-11-20 12:44:26.915263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.915293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.915492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.915526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.915737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.915768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.915869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.915901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.916093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.916126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 
00:30:21.298 [2024-11-20 12:44:26.916423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.916460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.916733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.916767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.916987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.917019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.917285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.917317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.917512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.917553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 
00:30:21.298 [2024-11-20 12:44:26.917806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.917839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.917965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.917996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.918234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.918265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.918533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.918566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.918770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.918807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 
00:30:21.298 [2024-11-20 12:44:26.918929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.918961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.919162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.919193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.919376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.919406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.919653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.919685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 00:30:21.298 [2024-11-20 12:44:26.919863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.298 [2024-11-20 12:44:26.919893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.298 qpair failed and we were unable to recover it. 
00:30:21.301 [2024-11-20 12:44:26.946743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.301 [2024-11-20 12:44:26.946782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.301 qpair failed and we were unable to recover it.
00:30:21.301 [2024-11-20 12:44:26.946955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.946987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.947224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.947254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.947507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.947539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.947778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.947810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.947982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.948013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 
00:30:21.301 [2024-11-20 12:44:26.948191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.948221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.948514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.948547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.948750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.948781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.948916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.948947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.949144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.949176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 
00:30:21.301 [2024-11-20 12:44:26.949492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.949526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.949698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.949728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.949913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.949943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.950206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.950237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 00:30:21.301 [2024-11-20 12:44:26.950358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.301 [2024-11-20 12:44:26.950390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.301 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.950607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.950638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.950743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.950773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.951016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.951048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.951335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.951367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.951570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.951603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.951789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.951820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.951946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.951978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.952124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.952155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.952273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.952304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.952479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.952510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.952746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.952778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.953076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.953108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.953374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.953405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.953643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.953675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.953853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.953885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.954121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.954152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.954391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.954434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.954673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.954705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.954872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.954903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.955167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.955198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.955481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.955514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.955720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.955751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.955941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.955972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.956233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.956265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.956558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.956595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.956861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.956893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.957186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.957218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.957399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.957444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.957627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.957660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.957829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.957861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.958118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.958150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.958393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.958433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.958623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.958655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.958843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.958874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.959054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.959085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 
00:30:21.302 [2024-11-20 12:44:26.959277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.959308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.959545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.959577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.302 qpair failed and we were unable to recover it. 00:30:21.302 [2024-11-20 12:44:26.959788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.302 [2024-11-20 12:44:26.959820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.960069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.960100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.960366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.960399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.960584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.960614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.960874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.960906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.961213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.961245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.961526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.961559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.961811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.961843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.961975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.962007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.962294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.962327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.962498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.962532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.962784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.962816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.963018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.963049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.963220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.963251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.963508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.963541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.963785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.963817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.963993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.964026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.964205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.964237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.964428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.964460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.964647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.964678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.964959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.964991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.965277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.965317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.965490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.965523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.965659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.965691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.965954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.965986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.966106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.966137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.966400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.966456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.966722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.966762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.967027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.967059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.967346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.967378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.967662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.967695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.967892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.967923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.968146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.968178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.968308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.968340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.968510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.968542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.968780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.968812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.969109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.969141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 00:30:21.303 [2024-11-20 12:44:26.969405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.303 [2024-11-20 12:44:26.969459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.303 qpair failed and we were unable to recover it. 
00:30:21.303 [2024-11-20 12:44:26.969672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.969703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.969918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.969950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.970185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.970217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.970431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.970464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.970635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.970667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 
00:30:21.304 [2024-11-20 12:44:26.970956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.970988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.971257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.971288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.971545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.971578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.971872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.971904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.972184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.972216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 
00:30:21.304 [2024-11-20 12:44:26.972428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.972461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.972710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.972742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.972913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.972945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.973241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.973273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.973522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.973555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 
00:30:21.304 [2024-11-20 12:44:26.973685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.973716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.974049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.974119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.974425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.974462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.974731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.974763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.974953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.974985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 
00:30:21.304 [2024-11-20 12:44:26.975154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.975184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.975397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.975440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.975725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.975756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.976020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.976051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.976219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.976250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 
00:30:21.304 [2024-11-20 12:44:26.976460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.976492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.976682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.976713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.976889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.976921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.977139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.977169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.977436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.977479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 
00:30:21.304 [2024-11-20 12:44:26.977695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.977726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.977962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.977993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.978170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.978202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.978373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.978404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.978616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.978647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 
00:30:21.304 [2024-11-20 12:44:26.978856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.978887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.979020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.979052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.304 [2024-11-20 12:44:26.979234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.304 [2024-11-20 12:44:26.979265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.304 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.979523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.979555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.979846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.979877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.980157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.980189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.980450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.980482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.980777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.980807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.981056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.981088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.981351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.981382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.981561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.981630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.981842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.981880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.982178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.982210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.982477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.982512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.982701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.982732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.982966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.982998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.983177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.983208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.983399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.983442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.983679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.983711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.983831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.983862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.984030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.984062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.984252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.984285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.984535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.984568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.984862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.984893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.985165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.985197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.985316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.985348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.985536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.985569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.985686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.985717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.985978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.986009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.986217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.986247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.986516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.986548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.986735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.986766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.986950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.986982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.987167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.987199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.987477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.987516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.987757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.987789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.987998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.988030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.988233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.988264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.988506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.988539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 00:30:21.305 [2024-11-20 12:44:26.988779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.305 [2024-11-20 12:44:26.988812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.305 qpair failed and we were unable to recover it. 
00:30:21.305 [2024-11-20 12:44:26.988930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.988961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.989198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.989229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.989423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.989456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.989694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.989726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.989906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.989938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 
00:30:21.306 [2024-11-20 12:44:26.990050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.990081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.990372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.990404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.990548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.990583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.990833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.990865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.991101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.991133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 
00:30:21.306 [2024-11-20 12:44:26.991253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.991286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.991579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.991612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.991834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.991866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.992112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.992144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 00:30:21.306 [2024-11-20 12:44:26.992406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.306 [2024-11-20 12:44:26.992446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.306 qpair failed and we were unable to recover it. 
00:30:21.306 [2024-11-20 12:44:26.992713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.992745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.992928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.992960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.993201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.993233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.993437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.993470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.993707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.993739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.994030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.994062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.994253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.994286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.994440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.994708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.994739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.995004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.995037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.995288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.995319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.995492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.995527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.995763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.995795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.995960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.995992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.996234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.996266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.996532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.996565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.996692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.996724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.996918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.996949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.306 [2024-11-20 12:44:26.997131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.306 [2024-11-20 12:44:26.997162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.306 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.997397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.997445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.997648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.997679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.997873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.997905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.998142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.998174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.998418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.998451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.998626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.998658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.998835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.998866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.999129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.999161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.999376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.999408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.999598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.999630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:26.999869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:26.999900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.000137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.000169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.000427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.000460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.000645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.000676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.000872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.000905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.001086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.001116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.001399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.001442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.001572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.001605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.001886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.001918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.002180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.002212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.002314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.002343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.002525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.002559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.002807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.002839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.003129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.003160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.003395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.003451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.003640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.003672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.003844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.003876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.004085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.004132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.004329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.004363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.004696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.004729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.004914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.004946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.005154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.005188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.005368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.005401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.005708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.005740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.005996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.006029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.006214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.006246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.006431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.006464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.307 qpair failed and we were unable to recover it.
00:30:21.307 [2024-11-20 12:44:27.006756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.307 [2024-11-20 12:44:27.006788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.007056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.007087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.007278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.007310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.007495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.007535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.007789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.007821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.007941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.007975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.008259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.008291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.008541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.008575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.008759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.008791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.008970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.009003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.009120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.009153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.009329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.009360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.009647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.009680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.009937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.009969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.010140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.010171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.010286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.010317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.010497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.010529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.010802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.010834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.011114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.011146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.011383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.011421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.011598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.011629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.011914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.011945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.012191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.012223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.012474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.012507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.012768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.012799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.013094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.013126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.013328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.013359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.013625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.013657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.013827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.013858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.014121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.014153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.014435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.014505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.014710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.014745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.014973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.015005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.015251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.015283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.015504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.015538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.015743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.015774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.308 qpair failed and we were unable to recover it.
00:30:21.308 [2024-11-20 12:44:27.015960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.308 [2024-11-20 12:44:27.015990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.586 [2024-11-20 12:44:27.016238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.586 [2024-11-20 12:44:27.016270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.586 qpair failed and we were unable to recover it.
00:30:21.586 [2024-11-20 12:44:27.016430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.586 [2024-11-20 12:44:27.016463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.586 qpair failed and we were unable to recover it.
00:30:21.586 [2024-11-20 12:44:27.016737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.016768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.016950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.016981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.017234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.017265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.017501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.017533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.017656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.017693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 
00:30:21.586 [2024-11-20 12:44:27.017972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.018004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.018207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.018239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.018430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.018463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.018599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.018634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.018898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.018929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 
00:30:21.586 [2024-11-20 12:44:27.019151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.019183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.019364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.019395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.019653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.019685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.019866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.019897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.020075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.020109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 
00:30:21.586 [2024-11-20 12:44:27.020302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.020334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.020547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.020580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.020787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.020819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.020999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.021038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.021236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.021268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 
00:30:21.586 [2024-11-20 12:44:27.021535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.021568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.021783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.021815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.021989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.022021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.586 [2024-11-20 12:44:27.022215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.586 [2024-11-20 12:44:27.022249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.586 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.022446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.022478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.022684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.022716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.023005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.023037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.023296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.023328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.023547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.023580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.023754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.023786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.023963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.023993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.024284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.024315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.024552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.024585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.024720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.024752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.024949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.024991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.025204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.025237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.025527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.025559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.025733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.025765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.026043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.026074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.026261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.026291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.026532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.026565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.026850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.026882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.027158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.027189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.027399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.027441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.027701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.027733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.027937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.027976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.028124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.028155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.028338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.028370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.028615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.028646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.028828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.028863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.029049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.029081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.029360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.029391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.029605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.029637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.029819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.029853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.030073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.030105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.030287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.030319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.030505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.030538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.030804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.030836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.031109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.031140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.031352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.031384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 
00:30:21.587 [2024-11-20 12:44:27.031567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.587 [2024-11-20 12:44:27.031599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.587 qpair failed and we were unable to recover it. 00:30:21.587 [2024-11-20 12:44:27.031787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.031818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.032056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.032088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.032287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.032321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.032567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.032601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 
00:30:21.588 [2024-11-20 12:44:27.032844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.032876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.033172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.033203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.033335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.033366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.033621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.033655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.033831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.033863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 
00:30:21.588 [2024-11-20 12:44:27.034094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.034126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.034391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.034431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.034640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.034671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.034944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.034976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.035148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.035180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 
00:30:21.588 [2024-11-20 12:44:27.035444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.035477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.035763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.035795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.036002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.036033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.036147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.036178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.036301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.036332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 
00:30:21.588 [2024-11-20 12:44:27.036449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.036482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.036670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.036703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.036917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.036949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.037184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.037215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.037399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.037456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 
00:30:21.588 [2024-11-20 12:44:27.037724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.037755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.038066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.038137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.038348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.038385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.038521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.038555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 00:30:21.588 [2024-11-20 12:44:27.038817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.588 [2024-11-20 12:44:27.038851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.588 qpair failed and we were unable to recover it. 
00:30:21.589 [2024-11-20 12:44:27.047780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.589 [2024-11-20 12:44:27.047849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.589 qpair failed and we were unable to recover it.
00:30:21.591 [2024-11-20 12:44:27.066731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.591 [2024-11-20 12:44:27.066763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.591 qpair failed and we were unable to recover it. 00:30:21.591 [2024-11-20 12:44:27.066878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.591 [2024-11-20 12:44:27.066933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.591 qpair failed and we were unable to recover it. 00:30:21.591 [2024-11-20 12:44:27.067124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.591 [2024-11-20 12:44:27.067155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.591 qpair failed and we were unable to recover it. 00:30:21.591 [2024-11-20 12:44:27.067397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.591 [2024-11-20 12:44:27.067451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.591 qpair failed and we were unable to recover it. 00:30:21.591 [2024-11-20 12:44:27.067629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.067662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.067934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.067966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.068236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.068273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.068512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.068544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.068725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.068759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.068947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.068978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.069183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.069213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.069464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.069497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.069736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.069768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.069944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.069975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.070247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.070278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.070397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.070436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.070724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.070754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.070891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.070923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.071179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.071211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.071406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.071444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.071633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.071665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.071909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.071940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.072209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.072240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.072485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.072518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.072835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.072866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.073155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.073187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.073472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.073505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.073788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.073819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.074105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.074137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.074423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.074455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.074735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.074767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.075055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.075085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.075278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.075312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.075583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.075617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.075742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.075775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.076043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.076075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.076348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.076379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.076616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.076649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.076924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.076956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.077167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.077199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 
00:30:21.592 [2024-11-20 12:44:27.077429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.077462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.077645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.592 [2024-11-20 12:44:27.077696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.592 qpair failed and we were unable to recover it. 00:30:21.592 [2024-11-20 12:44:27.077975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.078008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.078225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.078256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.078378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.078410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.078710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.078741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.078925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.078966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.079241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.079273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.079382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.079420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.079616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.079648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.079913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.079945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.080241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.080273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.080463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.080496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.080764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.080796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.081065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.081097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.081213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.081246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.081435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.081468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.081735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.081766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.082008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.082040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.082153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.082185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.082477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.082510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.082751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.082783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.082952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.082984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.083225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.083256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.083460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.083493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.083711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.083742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.083917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.083948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.084156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.084187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.084378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.084410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.084698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.084729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.085028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.085059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.085256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.085287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.085585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.085617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.085813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.085845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.086115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.086146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.086439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.086472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.086745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.086776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.087068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.087099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.087392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.087434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 00:30:21.593 [2024-11-20 12:44:27.087605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.593 [2024-11-20 12:44:27.087634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.593 qpair failed and we were unable to recover it. 
00:30:21.593 [2024-11-20 12:44:27.087876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.594 [2024-11-20 12:44:27.087910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.594 qpair failed and we were unable to recover it. 00:30:21.594 [2024-11-20 12:44:27.088166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.594 [2024-11-20 12:44:27.088199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.594 qpair failed and we were unable to recover it. 00:30:21.594 [2024-11-20 12:44:27.088316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.594 [2024-11-20 12:44:27.088347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.594 qpair failed and we were unable to recover it. 00:30:21.594 [2024-11-20 12:44:27.088634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.594 [2024-11-20 12:44:27.088668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.594 qpair failed and we were unable to recover it. 00:30:21.594 [2024-11-20 12:44:27.088854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.594 [2024-11-20 12:44:27.088885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.594 qpair failed and we were unable to recover it. 
00:30:21.594 [2024-11-20 12:44:27.089068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.089099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.089353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.089392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.089573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.089605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.089902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.089933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.090202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.090234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.090502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.090535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.090664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.090696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.090984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.091016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.091258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.091290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.091576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.091610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.091787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.091819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.091959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.091991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.092265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.092297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.092571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.092604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.092817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.092848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.093125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.093158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.093356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.093387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.093652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.093685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.093901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.093933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.094209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.094241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.094535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.094568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.094750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.094783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.095052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.095084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.095325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.095356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.095560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.095593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.095805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.095837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.096138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.096170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.096379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.096433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.096709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.096741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.097018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.097050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.097239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.097270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.097561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.097594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.594 [2024-11-20 12:44:27.097863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.594 [2024-11-20 12:44:27.097895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.594 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.098111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.098143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.098315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.098346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.098559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.098593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.098885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.098916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.099212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.099244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.099519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.099552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.099745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.099776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.099896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.099928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.100201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.100239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.100456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.100493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.100687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.100719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.100970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.101001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.101183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.101215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.101424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.101458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.101650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.101682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.101868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.101900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.102173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.102205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.102476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.102509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.102707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.102738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.103000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.103032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.103278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.103310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.103610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.103643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.103911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.103944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.104207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.104239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.104454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.104487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.104760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.104792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.105086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.105118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.105326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.105358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.105633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.105667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.105895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.105926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.106111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.106143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.106389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.106429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.106626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.595 [2024-11-20 12:44:27.106658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.595 qpair failed and we were unable to recover it.
00:30:21.595 [2024-11-20 12:44:27.106933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.106965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.107243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.107274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.107511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.107545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.107742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.107775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.107964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.107996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.108165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.108197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.108393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.108442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.108745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.108777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.108906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.108938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.109209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.109242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.109429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.109461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.109643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.109675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.109929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.109960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.110203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.110235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.110514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.110547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.110754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.110791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.111036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.111067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.111347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.111378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.111562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.111595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.111860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.111891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.112177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.112209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.112447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.112480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.112623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.112655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.112828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.112860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.113085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.113117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.113300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.113331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.113524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.113556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.113733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.113765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.113941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.596 [2024-11-20 12:44:27.113973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.596 qpair failed and we were unable to recover it.
00:30:21.596 [2024-11-20 12:44:27.114256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.114287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.114427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.114459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.114734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.114765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.115015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.115047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.115222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.115253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 
00:30:21.596 [2024-11-20 12:44:27.115446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.115479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.115679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.115710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.115904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.115936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.116087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.596 [2024-11-20 12:44:27.116118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.596 qpair failed and we were unable to recover it. 00:30:21.596 [2024-11-20 12:44:27.116377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.116409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.116640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.116672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.116798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.116831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.117100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.117132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.117433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.117467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.117788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.117819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.118076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.118108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.118405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.118446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.118694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.118727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.119039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.119070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.119334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.119365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.119657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.119690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.119945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.119976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.120299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.120331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.120621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.120655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.120934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.120965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.121084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.121121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.121312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.121352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.121571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.121604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.121902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.121934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.122210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.122242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.122539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.122572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.122845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.122876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.123073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.123104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.123367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.123399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.123690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.123723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.123969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.124000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.124218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.124250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.124466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.124500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.124777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.124808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.125132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.125163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.125477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.125511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.125712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.125744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.125991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.126023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.126326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.126357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.126628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.126662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 
00:30:21.597 [2024-11-20 12:44:27.126957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.126988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.597 [2024-11-20 12:44:27.127263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.597 [2024-11-20 12:44:27.127294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.597 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.127482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.127515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.127699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.127730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.127905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.127937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.598 [2024-11-20 12:44:27.128159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.128191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.128308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.128343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.128644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.128677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.128969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.129002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.129148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.129180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.598 [2024-11-20 12:44:27.129362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.129394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.129549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.129582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.129713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.129745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.129991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.130022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.130309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.130341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.598 [2024-11-20 12:44:27.130563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.130596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.130799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.130831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.131109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.131141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.131342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.131373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.131665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.131698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.598 [2024-11-20 12:44:27.131904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.131936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.132115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.132154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.132450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.132483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.132670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.132703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.132896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.132929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.598 [2024-11-20 12:44:27.133178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.133209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.133336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.133368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.133653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.133686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.133875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.133906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.134101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.134134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.598 [2024-11-20 12:44:27.134316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.134557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.134590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.134893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.134925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.135193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.135225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.135399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.135439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.598 [2024-11-20 12:44:27.135724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.135756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.135939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.135973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.136240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.136272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.136482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.136518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 00:30:21.598 [2024-11-20 12:44:27.136774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.598 [2024-11-20 12:44:27.136805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.598 qpair failed and we were unable to recover it. 
00:30:21.599 [2024-11-20 12:44:27.137066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.599 [2024-11-20 12:44:27.137098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.599 qpair failed and we were unable to recover it.
[The error pair above (posix.c:1054:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock failing the qpair) repeats continuously from 12:44:27.137 through 12:44:27.169, with only the timestamps varying. The target is addr=10.0.0.2, port=4420 throughout; tqpair values observed in this window: 0x7f2b68000b90, 0x1504020, 0x7f2b5c000b90. Every attempt ends with "qpair failed and we were unable to recover it."]
00:30:21.602 [2024-11-20 12:44:27.169169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.169205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.169483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.169518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.169831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.169866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.170118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.170152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.170447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.170483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 
00:30:21.602 [2024-11-20 12:44:27.170759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.170794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.170996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.171032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.171336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.171370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.171573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.171608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.171893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.171928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 
00:30:21.602 [2024-11-20 12:44:27.172209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.172244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.172505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.172541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.172741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.172779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.172981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.173023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.173242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.173278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 
00:30:21.602 [2024-11-20 12:44:27.173600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.173636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.173865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.173901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.174050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.174084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.174283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.174316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.174529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.174565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 
00:30:21.602 [2024-11-20 12:44:27.174784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.174819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.175100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.175135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.175391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.175436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.175737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.175772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.175991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.176024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 
00:30:21.602 [2024-11-20 12:44:27.176251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.176285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.176536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.176572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.176864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.176898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.177176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.177211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.177497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.177532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 
00:30:21.602 [2024-11-20 12:44:27.177790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.177824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.602 [2024-11-20 12:44:27.178015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.602 [2024-11-20 12:44:27.178050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.602 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.178300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.178336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.178559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.178595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.178740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.178774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 
00:30:21.603 [2024-11-20 12:44:27.178951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.178986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.179264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.179298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.179569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.179604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.179825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.179858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.180085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.180120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 
00:30:21.603 [2024-11-20 12:44:27.180378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.180422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.180742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.180776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.181078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.181112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.181379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.181433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.181654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.181688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 
00:30:21.603 [2024-11-20 12:44:27.181964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.181998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.182281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.182316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.182604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.182640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.182917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.182952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.183265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.183300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 
00:30:21.603 [2024-11-20 12:44:27.183574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.183610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.183922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.183956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.184232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.184267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.184537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.184584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.184798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.184832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 
00:30:21.603 [2024-11-20 12:44:27.185080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.185115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.185307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.185342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.185543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.185578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.185770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.185804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.186055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.186090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 
00:30:21.603 [2024-11-20 12:44:27.186340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.186374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.186520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.186554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.186755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.186790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.186980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.187015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.187210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.187246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 
00:30:21.603 [2024-11-20 12:44:27.187525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.187561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.187848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.187882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.188091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.188126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.188256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.188291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.603 qpair failed and we were unable to recover it. 00:30:21.603 [2024-11-20 12:44:27.188518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.603 [2024-11-20 12:44:27.188554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 
00:30:21.604 [2024-11-20 12:44:27.188768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.188804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.189141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.189176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.189372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.189408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.189657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.189691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.189887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.189922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 
00:30:21.604 [2024-11-20 12:44:27.190164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.190199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.190452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.190487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.190786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.190821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.191112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.191147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.191425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.191460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 
00:30:21.604 [2024-11-20 12:44:27.191745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.191781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.191962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.191997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.192278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.192311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.192446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.192484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.192779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.192814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 
00:30:21.604 [2024-11-20 12:44:27.193118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.193152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.193440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.193477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.193752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.193787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.194009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.194044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.194384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.194426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 
00:30:21.604 [2024-11-20 12:44:27.194663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.194698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.194973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.195007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.195199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.195234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.195366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.195407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.195723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.195759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 
00:30:21.604 [2024-11-20 12:44:27.196040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.196075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.196362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.196396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.196602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.196637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.196841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.196876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.197128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.197161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 
00:30:21.604 [2024-11-20 12:44:27.197399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.197442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.197744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.197778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.604 [2024-11-20 12:44:27.198073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.604 [2024-11-20 12:44:27.198107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.604 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.198377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.198420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.198709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.198743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.198959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.198992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.199251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.199286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.199503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.199538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.199738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.199772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.199960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.199996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.200274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.200308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.200596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.200633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.200851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.200885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.201096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.201130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.201362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.201397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.201602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.201641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.201950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.201990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.202136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.202171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.202448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.202484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.202808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.202846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.203170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.203207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.203499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.203537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.203803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.203838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.203959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.203995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.204201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.204236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.204488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.204524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.204644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.204678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.204892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.204926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.205179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.205213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.205436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.205473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.205754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.205790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.205976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.206010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.206200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.206235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.206434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.206477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.206688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.206723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.206940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.206977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.207185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.207219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.207421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.207457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.207660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.207695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 00:30:21.605 [2024-11-20 12:44:27.207873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.605 [2024-11-20 12:44:27.207907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.605 qpair failed and we were unable to recover it. 
00:30:21.605 [2024-11-20 12:44:27.208165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.208203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.208410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.208454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.208685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.208720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.208993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.209028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.209322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.209356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 
00:30:21.606 [2024-11-20 12:44:27.209646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.209683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.209926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.209961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.210105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.210140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.210328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.210365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.210561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.210597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 
00:30:21.606 [2024-11-20 12:44:27.210789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.210828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.211080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.211115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.211244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.211279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.211463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.211499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.211698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.211733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 
00:30:21.606 [2024-11-20 12:44:27.211859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.211896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.212099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.212134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.212428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.212470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.212612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.212649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.212840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.212875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 
00:30:21.606 [2024-11-20 12:44:27.213103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.213141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.213432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.213467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.213721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.213755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.213982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.214025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.214339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.214376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 
00:30:21.606 [2024-11-20 12:44:27.214726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.214801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.215013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.215051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.215364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.215400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.215702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.215738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.216000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.216035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 
00:30:21.606 [2024-11-20 12:44:27.216304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.216339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.216518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.216555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.216809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.216844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.217049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.217093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 00:30:21.606 [2024-11-20 12:44:27.217362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.606 [2024-11-20 12:44:27.217397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.606 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-11-20 12:44:27.235020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.608 [2024-11-20 12:44:27.235075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.608 qpair failed and we were unable to recover it. 00:30:21.608 [2024-11-20 12:44:27.235364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.608 [2024-11-20 12:44:27.235401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.608 qpair failed and we were unable to recover it. 00:30:21.608 [2024-11-20 12:44:27.235642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.608 [2024-11-20 12:44:27.235678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.608 qpair failed and we were unable to recover it. 00:30:21.608 [2024-11-20 12:44:27.235929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.608 [2024-11-20 12:44:27.235964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.608 qpair failed and we were unable to recover it. 00:30:21.608 [2024-11-20 12:44:27.236155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.608 [2024-11-20 12:44:27.236190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-11-20 12:44:27.246818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.609 [2024-11-20 12:44:27.246895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.609 qpair failed and we were unable to recover it. 00:30:21.609 [2024-11-20 12:44:27.247125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.609 [2024-11-20 12:44:27.247163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.609 qpair failed and we were unable to recover it. 00:30:21.609 [2024-11-20 12:44:27.247495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.609 [2024-11-20 12:44:27.247534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.609 qpair failed and we were unable to recover it. 00:30:21.609 [2024-11-20 12:44:27.247688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.609 [2024-11-20 12:44:27.247727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.609 qpair failed and we were unable to recover it. 00:30:21.609 [2024-11-20 12:44:27.248055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.609 [2024-11-20 12:44:27.248090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-11-20 12:44:27.248349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.609 [2024-11-20 12:44:27.248384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.609 qpair failed and we were unable to recover it. 00:30:21.609 [2024-11-20 12:44:27.248545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.248580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.248828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.248863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.249130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.249165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.249448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.249485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.610 [2024-11-20 12:44:27.249736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.249771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.249919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.249954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.250148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.250187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.250500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.250536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.250806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.250841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.610 [2024-11-20 12:44:27.250986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.251020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.251242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.251277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.251482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.251520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.251781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.251816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.252088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.252122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.610 [2024-11-20 12:44:27.252420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.252455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.252733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.252768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.253011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.253045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.253316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.253351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.253671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.253707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.610 [2024-11-20 12:44:27.253989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.254022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.254280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.254314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.254448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.254491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.254742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.254776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.255078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.255113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.610 [2024-11-20 12:44:27.255248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.255283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.255553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.255589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.255852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.255887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.256101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.256137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.256263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.256297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.610 [2024-11-20 12:44:27.256482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.256517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.256700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.256734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.257014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.257048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.257318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.257352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.257646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.257681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.610 [2024-11-20 12:44:27.257956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.257990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.258281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.258317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.258599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.258634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.258898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.258932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 00:30:21.610 [2024-11-20 12:44:27.259230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.610 [2024-11-20 12:44:27.259264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.259534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.259571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.259822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.259856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.260062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.260096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.260350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.260385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.260685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.260721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.260988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.261021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.261209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.261244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.261430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.261466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.261666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.261700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.261890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.261931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.262154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.262189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.262386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.262431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.262637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.262672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.262893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.262929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.263067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.263101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.263293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.263327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.263527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.263566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.263790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.263824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.264016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.264051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.264251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.264287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.264431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.264467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.264735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.264770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.265056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.265089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.265378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.265420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.265610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.265645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.265838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.265872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.266143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.266177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.266366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.266403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.266613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.266648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.266924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.266958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.267093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.267127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.267435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.267470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.267670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.267704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.267926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.267960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 00:30:21.611 [2024-11-20 12:44:27.268171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.611 [2024-11-20 12:44:27.268206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.611 qpair failed and we were unable to recover it. 
00:30:21.611 [2024-11-20 12:44:27.268487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.611 [2024-11-20 12:44:27.268523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.611 qpair failed and we were unable to recover it.
00:30:21.611 [2024-11-20 12:44:27.268779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.611 [2024-11-20 12:44:27.268820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.611 qpair failed and we were unable to recover it.
00:30:21.611 [2024-11-20 12:44:27.268935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.611 [2024-11-20 12:44:27.268970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.611 qpair failed and we were unable to recover it.
00:30:21.611 [2024-11-20 12:44:27.269190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.269225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.269427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.269463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.269696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.269731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.269874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.269909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.270089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.270124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.270428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.270463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.270624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.270659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.270937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.270973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.271151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.271185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.271375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.271437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.271637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.271672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.271870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.271904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.272098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.272134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.272331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.272365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.272573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.272611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.272742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.272778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.273076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.273111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.273376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.273409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.273624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.273660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.273993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.274027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.274237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.274272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.274484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.274523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.274706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.274741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.275022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.275057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.275245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.275280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.275496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.275532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.275793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.275828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.276111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.276146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.276445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.276482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.276742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.276778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.276911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.276945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.277167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.277201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.277394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.277435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.277709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.277744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.277944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.277979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.278260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.278294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.278517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.278554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.612 [2024-11-20 12:44:27.278692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.612 [2024-11-20 12:44:27.278726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.612 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.278940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.278975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.279276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.279311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.279514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.279550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.279684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.279719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.279980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.280015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.280141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.280176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.280513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.280548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.280749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.280784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.281000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.281035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.281168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.281205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.281392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.281437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.281640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.281676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.281975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.282009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.282129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.282164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.282344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.282379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.282698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.282734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.282985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.283019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.283280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.283315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.283619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.283656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.283931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.283965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.284255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.284289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.284515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.284551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.284856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.284891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.285110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.285146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.285343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.285377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.285607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.285642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.285940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.285974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.286207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.286242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.286358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.286399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.286616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.286650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.286851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.286886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.287072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.287106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.287407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.287453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.613 [2024-11-20 12:44:27.287745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.613 [2024-11-20 12:44:27.287781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.613 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.288071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.288106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.288340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.288374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.288569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.288604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.288800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.288835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.289041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.289076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.289300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.289334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.289462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.289499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.289683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.289718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.289902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.289938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.290138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.290173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.290448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.290483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.290698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.290733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.290912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.290946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.291143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.291177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.291460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.291495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.291690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.291725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.291999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.292034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.292328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.292363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.292510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.292545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.292799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.292833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.293086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.293121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.293236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.293276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.293544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.293580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.293720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.293755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.293858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.293890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.294115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.294150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.294350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.294384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.294651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.294686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.294915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.294950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.295155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.295190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.295319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.295354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.295566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.295601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.295807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.295842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.296086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.296121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.296451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.296487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.296779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.296815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.297119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.297154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.297350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.614 [2024-11-20 12:44:27.297385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.614 qpair failed and we were unable to recover it.
00:30:21.614 [2024-11-20 12:44:27.297663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.615 [2024-11-20 12:44:27.297699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.615 qpair failed and we were unable to recover it.
00:30:21.615 [2024-11-20 12:44:27.297887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.615 [2024-11-20 12:44:27.297922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:21.615 qpair failed and we were unable to recover it.
00:30:21.615 [2024-11-20 12:44:27.298169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.298205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.298398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.298444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.298738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.298774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.298965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.299000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.299184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.299219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 
00:30:21.615 [2024-11-20 12:44:27.299428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.299465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.299670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.299704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.299903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.299939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.300072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.300107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.300313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.300347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 
00:30:21.615 [2024-11-20 12:44:27.300657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.300693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.300892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.300927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.301146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.301180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.301401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.301444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.301581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.301616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 
00:30:21.615 [2024-11-20 12:44:27.301809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.301843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.302028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.302063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.302445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.302481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.302667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.302703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 00:30:21.615 [2024-11-20 12:44:27.302830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.302865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 
00:30:21.615 [2024-11-20 12:44:27.303563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.615 [2024-11-20 12:44:27.303641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:21.615 qpair failed and we were unable to recover it. 
00:30:21.616 [2024-11-20 12:44:27.314092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.616 [2024-11-20 12:44:27.314169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.616 qpair failed and we were unable to recover it. 
00:30:21.617 [2024-11-20 12:44:27.321960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.321994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.322255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.322289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.322495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.322532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.322776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.322811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.322931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.322966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 
00:30:21.617 [2024-11-20 12:44:27.323219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.323253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.323530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.323564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.323738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.323818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.324030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.324108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.324252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.324291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 
00:30:21.617 [2024-11-20 12:44:27.324658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.324697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.324950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.324984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.325113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.325151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.325434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.325471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.325592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.325627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 
00:30:21.617 [2024-11-20 12:44:27.325808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.325843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.617 [2024-11-20 12:44:27.326021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.617 [2024-11-20 12:44:27.326056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.617 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.326261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.326296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.326475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.326511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.326647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.326681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 
00:30:21.618 [2024-11-20 12:44:27.326886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.326919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.327219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.327253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.327576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.327613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.327918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.327952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.328202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.328237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 
00:30:21.618 [2024-11-20 12:44:27.328428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.328462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.328712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.328746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.618 [2024-11-20 12:44:27.328875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.618 [2024-11-20 12:44:27.328908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.618 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.329125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.329163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.329375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.329423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 
00:30:21.895 [2024-11-20 12:44:27.329606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.329641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.329835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.329869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.330145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.330179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.330374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.330408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.330619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.330661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 
00:30:21.895 [2024-11-20 12:44:27.330785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.330819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.331000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.331034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.331226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.331259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.331444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-11-20 12:44:27.331481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.895 qpair failed and we were unable to recover it. 00:30:21.895 [2024-11-20 12:44:27.331668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.331702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.331888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.331922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.332114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.332148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.332448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.332484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.332683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.332717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.332908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.332945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.333270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.333304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.333592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.333626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.333878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.333913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.334109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.334144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.334265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.334299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.334560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.334596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.334778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.334813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.335002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.335036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.335216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.335250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.335460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.335496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.335711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.335746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.335997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.336031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.336219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.336252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.336477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.336513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.336816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.336850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.337134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.337167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.337370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.337424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.337647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.337682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.337865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.337899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.338126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.338161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.338339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.338374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.338582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.338618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.338877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.338911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.339032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.339067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.339320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.339355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.339503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.339539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.339750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.339783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.340046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.340082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.340299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.340334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 00:30:21.896 [2024-11-20 12:44:27.340582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.340617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
00:30:21.896 [2024-11-20 12:44:27.340910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.896 [2024-11-20 12:44:27.340945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.896 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 12:44:27.340910 through 12:44:27.370465, always against addr=10.0.0.2, port=4420; the failing tqpair handle changes from 0x1504020 to 0x7f2b5c000b90, then 0x7f2b60000b90, then 0x7f2b68000b90 over the course of the retries ...]
00:30:21.899 [2024-11-20 12:44:27.370666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.370700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.370830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.370863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.371051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.371092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.371371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.371407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.371713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.371748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 
00:30:21.900 [2024-11-20 12:44:27.371943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.371986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.372175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.372208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.372406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.372448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.372652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.372686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.372864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.372897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 
00:30:21.900 [2024-11-20 12:44:27.373029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.373063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.373324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.373359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.373684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.373719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.373998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.374033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.374319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.374353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 
00:30:21.900 [2024-11-20 12:44:27.374506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.374541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.374796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.374830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.375025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.375059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.375337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.375371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.375584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.375619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 
00:30:21.900 [2024-11-20 12:44:27.375800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.375833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.375962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.375996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.376171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.376206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.376479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.376513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.376784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.376817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 
00:30:21.900 [2024-11-20 12:44:27.377131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.377165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.377300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.377334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.377449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.377484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.377694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.377728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.377933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.377969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 
00:30:21.900 [2024-11-20 12:44:27.378286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.378322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.378580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.900 [2024-11-20 12:44:27.378616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.900 qpair failed and we were unable to recover it. 00:30:21.900 [2024-11-20 12:44:27.378880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.378959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.379169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.379206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.379491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.379531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.379653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.379688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.379956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.379991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.380123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.380158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.380452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.380489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.380792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.380825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.381051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.381089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.381367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.381402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.381660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.381694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.381904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.381939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.382171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.382210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.382351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.382386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.382659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.382694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.382817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.382853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.382986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.383021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.383142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.383176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.383356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.383390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.383553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.383589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.383717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.383752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.384032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.384067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.384277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.384311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.384593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.384630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.384910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.384945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.385148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.385183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.385376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.385418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.385671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.385713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.386007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.386042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.386173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.386208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.386464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.386500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.386682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.386716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.386969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.387003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.387210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.387245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.387454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.387490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.387622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.387656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.387859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.387892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 00:30:21.901 [2024-11-20 12:44:27.388187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.901 [2024-11-20 12:44:27.388221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.901 qpair failed and we were unable to recover it. 
00:30:21.901 [2024-11-20 12:44:27.388438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.388475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.388730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.388761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.388970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.389004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.389227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.389262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.389391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.389434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 
00:30:21.902 [2024-11-20 12:44:27.389726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.389760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.390012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.390046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.390249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.390284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.390480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.390516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 00:30:21.902 [2024-11-20 12:44:27.390796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.902 [2024-11-20 12:44:27.390831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.902 qpair failed and we were unable to recover it. 
00:30:21.905 [2024-11-20 12:44:27.421048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.421083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.421330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.421365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.421689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.421725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.422046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.422080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.422393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.422435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 
00:30:21.905 [2024-11-20 12:44:27.422627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.422661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.422975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.423009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.423205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.423240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.423548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.423583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.423711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.423746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 
00:30:21.905 [2024-11-20 12:44:27.424025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.424059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.424341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.424376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.424661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.424697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.424908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.424942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.425129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.425164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 
00:30:21.905 [2024-11-20 12:44:27.425358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.425397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.425717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.425753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.426024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.426059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.426264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.426305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.426592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.426628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 
00:30:21.905 [2024-11-20 12:44:27.426923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.426958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.427155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.427190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.427391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.427434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.427587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.427625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.427908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.427942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 
00:30:21.905 [2024-11-20 12:44:27.428234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.428269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.428475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.428511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.428821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.905 [2024-11-20 12:44:27.428856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.905 qpair failed and we were unable to recover it. 00:30:21.905 [2024-11-20 12:44:27.429153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.429191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.429449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.429486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.429664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.429700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.429951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.429987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.430275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.430311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.430611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.430647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.430847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.430882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.431156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.431190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.431370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.431405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.431710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.431745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.431879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.431914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.432147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.432182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.432397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.432438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.432742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.432776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.432999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.433036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.433252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.433286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.433552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.433588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.433789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.433826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.433973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.434008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.434277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.434312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.434605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.434642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.434782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.434817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.435112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.435148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.435322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.435358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.435564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.435600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.435789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.435824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.436034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.436069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.436284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.436318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.436546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.436581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.436832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.436867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.437088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.437123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.437401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.437450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.437577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.437611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.437728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.437763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.437959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.437995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.438213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.438247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.438522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.438559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 
00:30:21.906 [2024-11-20 12:44:27.438774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.438808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.906 qpair failed and we were unable to recover it. 00:30:21.906 [2024-11-20 12:44:27.439083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.906 [2024-11-20 12:44:27.439118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.439375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.439422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.439622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.439660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.439862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.439897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 
00:30:21.907 [2024-11-20 12:44:27.440090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.440124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.440312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.440347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.440542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.440578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.440786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.440821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.441003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.441038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 
00:30:21.907 [2024-11-20 12:44:27.441291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.441325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.441609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.441646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.441981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.442016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.442201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.442238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.442563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.442599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 
00:30:21.907 [2024-11-20 12:44:27.442814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.442849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.443124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.443159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.443430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.443465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.443760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.443795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.444001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.444037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 
00:30:21.907 [2024-11-20 12:44:27.444344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.444375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.444576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.444615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.444744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.444776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.445056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.445091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.445370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.445405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 
00:30:21.907 [2024-11-20 12:44:27.445735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.445771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.445973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.446008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.446314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.446348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.446478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.446515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.446646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.446680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 
00:30:21.907 [2024-11-20 12:44:27.446882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.446916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.447112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.447147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.447323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.447358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.447688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.447724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.447981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.448016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 
00:30:21.907 [2024-11-20 12:44:27.448156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.448194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.448342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.448377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.448532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.448569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.448820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.448854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.907 qpair failed and we were unable to recover it. 00:30:21.907 [2024-11-20 12:44:27.449096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.907 [2024-11-20 12:44:27.449131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.449308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.449342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.449560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.449595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.449819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.449853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.449973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.450007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.450302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.450336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.450545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.450581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.450789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.450823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.451069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.451104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.451386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.451440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.451635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.451670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.451793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.451828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.452027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.452062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.452165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.452200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.452318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.452353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.452510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.452545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.452724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.452758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.453062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.453096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.453367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.453402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.453568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.453603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.453792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.453827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.454130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.454165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.454387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.454433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.454716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.454752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.454933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.454968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.455249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.455284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.455463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.455499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.455694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.455728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.455858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.455893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.456088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.456123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.456341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.456378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.456608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.456644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.456937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.456971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.457170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.457205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.457404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.457450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.457644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.457679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 
00:30:21.908 [2024-11-20 12:44:27.457963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.457998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.458147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.458182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.458439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.908 [2024-11-20 12:44:27.458476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.908 qpair failed and we were unable to recover it. 00:30:21.908 [2024-11-20 12:44:27.458690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.458724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.458981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.459017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 
00:30:21.909 [2024-11-20 12:44:27.459263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.459297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.459599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.459636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.459969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.460004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.460309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.460343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.460553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.460589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 
00:30:21.909 [2024-11-20 12:44:27.460898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.460933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.461064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.461100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.461217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.461252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.461451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.461487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.461820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.461902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 
00:30:21.909 [2024-11-20 12:44:27.462230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.462270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.462495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.462534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.462809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.462850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.463117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.463152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.463361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.463399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 
00:30:21.909 [2024-11-20 12:44:27.463637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.463673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.463857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.463895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.464195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.464234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.464549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.464585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.464865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.464902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 
00:30:21.909 [2024-11-20 12:44:27.465216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.465254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.465548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.465586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.465808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.465866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.466095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.466140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.466339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.466374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 
00:30:21.909 [2024-11-20 12:44:27.466566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.466604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.466750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.466792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.467046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.467083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.467292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.467327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.467534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.467571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 
00:30:21.909 [2024-11-20 12:44:27.467846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.467887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.909 [2024-11-20 12:44:27.468028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.909 [2024-11-20 12:44:27.468063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.909 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.468259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.468294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.468494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.468530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.468736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.468772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.910 [2024-11-20 12:44:27.468969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.469003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.469197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.469233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.469432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.469467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.469720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.469754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.469955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.469990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.910 [2024-11-20 12:44:27.470275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.470310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.470592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.470630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.470738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.470773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.470990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.471025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.471347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.471383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.910 [2024-11-20 12:44:27.471727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.471769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.471956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.471990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.472181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.472219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.472353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.472390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.472684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.472728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.910 [2024-11-20 12:44:27.473009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.473044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.473232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.473268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.473452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.473488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.473662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.473698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.473833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.473868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.910 [2024-11-20 12:44:27.474085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.474120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.474390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.474437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.474761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.474796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.475070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.475105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.475352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.475387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.910 [2024-11-20 12:44:27.475649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.475685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.475939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.475974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.476196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.476230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.476390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.476443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.476633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.476668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.910 [2024-11-20 12:44:27.476926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.476964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.477177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.477212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.477395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.477442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.477701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.477736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 00:30:21.910 [2024-11-20 12:44:27.477899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.910 [2024-11-20 12:44:27.477934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.910 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.478152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.478187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.478473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.478510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.478644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.478680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.478959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.478994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.479194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.479228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.479468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.479504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.479757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.479798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.480001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.480037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.480301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.480337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.480649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.480685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.480866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.480901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.481121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.481156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.481295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.481330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.481525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.481561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.481838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.481871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.482151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.482186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.482403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.482455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.482766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.482801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.483028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.483063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.483261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.483296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.483499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.483539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.483668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.483704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.483979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.484014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.484266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.484300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.484497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.484533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.484723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.484759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.484975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.485009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.485322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.485358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.485667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.485702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.485957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.485992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.486184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.486220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.486505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.486541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.486720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.486755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.487060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.487102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.487313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.487348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 
00:30:21.911 [2024-11-20 12:44:27.487578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.487615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.487899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.487934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.488124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.911 [2024-11-20 12:44:27.488162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.911 qpair failed and we were unable to recover it. 00:30:21.911 [2024-11-20 12:44:27.488474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.488511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.488766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.488801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 
00:30:21.912 [2024-11-20 12:44:27.489102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.489137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.489454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.489490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.489702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.489737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.490007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.490042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.490326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.490361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 
00:30:21.912 [2024-11-20 12:44:27.490643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.490679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.490895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.490931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.491136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.491172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.491382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.491430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 00:30:21.912 [2024-11-20 12:44:27.491627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.491662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 
00:30:21.912 [2024-11-20 12:44:27.491942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.912 [2024-11-20 12:44:27.491978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.912 qpair failed and we were unable to recover it. 
[... same connect() failure (errno = 111, ECONNREFUSED) and qpair recovery error for tqpair=0x1504020, addr=10.0.0.2, port=4420 repeated continuously from 12:44:27.492212 through 12:44:27.522624; duplicate entries elided ...]
00:30:21.915 [2024-11-20 12:44:27.522942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.522977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 
00:30:21.915 [2024-11-20 12:44:27.523193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.523228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.523430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.523466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.523738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.523772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.524032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.524067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.524375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.524409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 
00:30:21.915 [2024-11-20 12:44:27.524666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.524701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.524827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.524861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.525067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.525100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.525322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.525357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.525650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.525686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 
00:30:21.915 [2024-11-20 12:44:27.525903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.525938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.526143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.526179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.526435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.526471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.526599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.526633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.526843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.526877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 
00:30:21.915 [2024-11-20 12:44:27.527179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.527215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.527396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.527440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.527671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.527706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.527842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.527881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.528078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.528112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 
00:30:21.915 [2024-11-20 12:44:27.528257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.528292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.528505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.528541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.915 [2024-11-20 12:44:27.528716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.915 [2024-11-20 12:44:27.528751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.915 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.528943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.528978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.529277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.529314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.529514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.529551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.529824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.529859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.530037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.530071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.530347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.530382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.530654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.530690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.530886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.530920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.531142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.531183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.531477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.531514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.531779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.531813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.532113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.532147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.532440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.532476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.532729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.532763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.532976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.533011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.533288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.533323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.533467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.533502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.533808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.533843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.534090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.534124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.534431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.534467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.534608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.534643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.534892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.534926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.535130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.535165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.535347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.535382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.535614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.535650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.535758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.535792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.536068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.536103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.536308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.536344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.536541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.536577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.536831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.536865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.537125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.537159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.537463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.537499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.537760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.537795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.538068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.538105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.538288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.538323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.538630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.538679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.538816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.538851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 
00:30:21.916 [2024-11-20 12:44:27.539129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.916 [2024-11-20 12:44:27.539164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.916 qpair failed and we were unable to recover it. 00:30:21.916 [2024-11-20 12:44:27.539451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.539488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.539610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.539644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.539824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.539859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.540139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.540174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 
00:30:21.917 [2024-11-20 12:44:27.540453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.540488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.540818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.540852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.540968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.541002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.541283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.541317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.541519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.541556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 
00:30:21.917 [2024-11-20 12:44:27.541812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.541846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.542119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.542154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.542451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.542487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.542704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.542740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 00:30:21.917 [2024-11-20 12:44:27.543015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.543050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it. 
00:30:21.917 [2024-11-20 12:44:27.543245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.917 [2024-11-20 12:44:27.543280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.917 qpair failed and we were unable to recover it.
[The same pair of errors — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1054:posix_sock_create, followed by the sock connection error for tqpair=0x1504020 (addr=10.0.0.2, port=4420) in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock and "qpair failed and we were unable to recover it." — repeats continuously from 12:44:27.543 through 12:44:27.574; identical entries omitted for brevity.]
00:30:21.920 [2024-11-20 12:44:27.574421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.574453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.574731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.574761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.575047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.575076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.575270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.575301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.575574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.575604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 
00:30:21.920 [2024-11-20 12:44:27.575883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.575914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.576193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.576223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.576348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.576383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.576587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.576619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.576916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.576948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 
00:30:21.920 [2024-11-20 12:44:27.577227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.577259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.577483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.577516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.577822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.577854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.578161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.578194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.578475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.578508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 
00:30:21.920 [2024-11-20 12:44:27.578795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.578830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.579078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.579112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.579429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.579465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.579739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.579773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 00:30:21.920 [2024-11-20 12:44:27.579979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.920 [2024-11-20 12:44:27.580014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.920 qpair failed and we were unable to recover it. 
00:30:21.920 [2024-11-20 12:44:27.580289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.580323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.580467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.580502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.580711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.580746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.580871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.580906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.581111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.581145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 
00:30:21.921 [2024-11-20 12:44:27.581277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.581313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.581523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.581559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.581835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.581869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.582068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.582103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.582381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.582426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 
00:30:21.921 [2024-11-20 12:44:27.582549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.582583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.582928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.583005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.583232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.583272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.583561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.583601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.583734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.583768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 
00:30:21.921 [2024-11-20 12:44:27.584039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.584073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.584381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.584427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.584707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.584741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.584964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.585005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.585198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.585233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 
00:30:21.921 [2024-11-20 12:44:27.585377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.585422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.585723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.585758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.585933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.585967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.586250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.586287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.586512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.586559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 
00:30:21.921 [2024-11-20 12:44:27.586742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.586777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.587047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.587081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.587292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.587327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.587612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.587647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.587926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.587964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 
00:30:21.921 [2024-11-20 12:44:27.588286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.588323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.588518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.588554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.588746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.588781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.589085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.589125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.589392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.589443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 
00:30:21.921 [2024-11-20 12:44:27.589594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.589629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.589917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.589953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.590152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.921 [2024-11-20 12:44:27.590187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.921 qpair failed and we were unable to recover it. 00:30:21.921 [2024-11-20 12:44:27.590448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.590487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.590676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.590714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 
00:30:21.922 [2024-11-20 12:44:27.590970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.591008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.591292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.591327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.591575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.591611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.591837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.591873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.592142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.592187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 
00:30:21.922 [2024-11-20 12:44:27.592470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.592507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.592792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.592828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.593020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.593058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.593259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.593295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.593497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.593542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 
00:30:21.922 [2024-11-20 12:44:27.593768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.593803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.594016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.594051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.594228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.594264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.594487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.594523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 00:30:21.922 [2024-11-20 12:44:27.594665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.922 [2024-11-20 12:44:27.594702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.922 qpair failed and we were unable to recover it. 
00:30:21.922 [2024-11-20 12:44:27.594896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.922 [2024-11-20 12:44:27.594931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:21.922 qpair failed and we were unable to recover it.
00:30:21.924 [2024-11-20 12:44:27.614516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.614554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.614705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.614760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.615040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.615075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.615257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.615293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.615506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.615546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 
00:30:21.924 [2024-11-20 12:44:27.615817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.615852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.616013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.616049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.616301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.616335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.616497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.616533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.616731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.616765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 
00:30:21.924 [2024-11-20 12:44:27.616959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.616993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.617271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.617305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.617504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.617539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.617656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.617690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.617925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.617959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 
00:30:21.924 [2024-11-20 12:44:27.618106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.618140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.618389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.924 [2024-11-20 12:44:27.618433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.924 qpair failed and we were unable to recover it. 00:30:21.924 [2024-11-20 12:44:27.618623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.618661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 00:30:21.925 [2024-11-20 12:44:27.618941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.618975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 00:30:21.925 [2024-11-20 12:44:27.619192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.619227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 
00:30:21.925 [2024-11-20 12:44:27.619433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.619468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 00:30:21.925 [2024-11-20 12:44:27.619595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.619630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 00:30:21.925 [2024-11-20 12:44:27.619826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.619863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 00:30:21.925 [2024-11-20 12:44:27.620133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.620167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 00:30:21.925 [2024-11-20 12:44:27.620427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.925 [2024-11-20 12:44:27.620463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:21.925 qpair failed and we were unable to recover it. 
00:30:22.212 [2024-11-20 12:44:27.639340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.212 [2024-11-20 12:44:27.639426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.212 qpair failed and we were unable to recover it. 
00:30:22.214 [2024-11-20 12:44:27.650176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.650211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.650554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.650597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.650797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.650834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.650991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.651026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.651277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.651312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 
00:30:22.214 [2024-11-20 12:44:27.651455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.651492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.651786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.651820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.652091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.652125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.652270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.652305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.652500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.652535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 
00:30:22.214 [2024-11-20 12:44:27.652842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.652876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.653017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.653052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.653241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.653276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.653537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.653573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.653876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.653911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 
00:30:22.214 [2024-11-20 12:44:27.654110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.654145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.654356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.654392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.214 [2024-11-20 12:44:27.654595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.214 [2024-11-20 12:44:27.654630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.214 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.654825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.654859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.655137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.655171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.655439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.655475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.655770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.655804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.656072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.656107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.656401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.656447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.656709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.656743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.657015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.657049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.657326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.657361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.657557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.657593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.657768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.657809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.657983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.658018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.658283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.658318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.658600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.658636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.658776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.658813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.659002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.659037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.659325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.659359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.659574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.659611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.659810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.659844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.660029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.660064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.660253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.660287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.660525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.660562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.660797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.660832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.661139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.661174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.661419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.661455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.661724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.661759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.661899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.661934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.662184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.662218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.662500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.662536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.662759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.662794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.663058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.663092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.663299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.663334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.663611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.663646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.663778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.663812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.664089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.664123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.664421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.664457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.664750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.664785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.665081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.665121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.665382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.665424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.665603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.665638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.665864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.665899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.666090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.666125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.666437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.666473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.666667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.666702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.666833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.666867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.667053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.667087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 00:30:22.215 [2024-11-20 12:44:27.667313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.667349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.215 qpair failed and we were unable to recover it. 
00:30:22.215 [2024-11-20 12:44:27.667538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.215 [2024-11-20 12:44:27.667574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.667768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.667803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.668018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.668053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.668280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.668314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.668582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.668618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.668837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.668871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.669152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.669187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.669378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.669421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.669722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.669757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.670014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.670048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.670350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.670386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.670699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.670734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.670932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.670966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.671164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.671198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.671394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.671439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.671722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.671757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.671936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.671970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.672164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.672204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.672490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.672527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.672799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.672833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.673049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.673083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.673352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.673386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.673642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.673677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.673965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.673998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.674192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.674226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.674348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.674383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.674660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.674695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.674896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.674930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.675122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.675157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.675358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.675392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.675565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.675601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.675660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511f60 (9): Bad file descriptor 00:30:22.216 [2024-11-20 12:44:27.676043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.676094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.676387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.676434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.676711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.676746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.676939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.676976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.677117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.677151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.677402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.677447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.677752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.677787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.678018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.678052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.678211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.678246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.678553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.678589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.678788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.678823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.679025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.679060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.679199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.679234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.679455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.679491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.679685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.679720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.679996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.680031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.680235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.680269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.680486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.680522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.680713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.680747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.680932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.680967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.681167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.681202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.681441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.681476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.681681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.681716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.681966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.682001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 
00:30:22.216 [2024-11-20 12:44:27.682130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.216 [2024-11-20 12:44:27.682164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.216 qpair failed and we were unable to recover it. 00:30:22.216 [2024-11-20 12:44:27.682450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.682486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.682787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.682835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.683088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.683123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.683399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.683445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.683732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.683767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.684036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.684070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.684269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.684304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.684575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.684611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.684890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.684923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.685212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.685247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.685447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.685483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.685683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.685718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.685899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.685934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.686154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.686189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.686439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.686475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.686767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.686802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.687004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.687039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.687224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.687259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.687511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.687548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.687797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.687832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.688054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.688090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.688345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.688380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.688679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.688714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.688981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.689016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.689213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.689247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.689392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.689437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.689704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.689738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.690017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.690051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.690280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.690316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.690651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.690686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.690886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.690921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.691147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.691181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.691461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.691497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.691757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.691792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.692046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.692080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.692355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.692390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.692732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.692767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.692895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.692930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.693238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.693273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.693425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.693461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.693657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.693692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.694017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.694057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.694345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.694381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.694614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.694649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.694923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.694958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.695174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.695209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.695338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.695374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.695610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.695646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 00:30:22.217 [2024-11-20 12:44:27.695906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.217 [2024-11-20 12:44:27.695941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.217 qpair failed and we were unable to recover it. 
00:30:22.217 [2024-11-20 12:44:27.696157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.217 [2024-11-20 12:44:27.696191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:22.217 qpair failed and we were unable to recover it.
00:30:22.220 (the above three-line sequence — connect() failed with errno = 111, sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeated for every subsequent connection attempt from 12:44:27.696 through 12:44:27.726)
00:30:22.220 [2024-11-20 12:44:27.726954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.726989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.727319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.727353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.727614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.727650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.727907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.727942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.728274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.728309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.728514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.728550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.728763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.728798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.729051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.729084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.729334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.729369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.729568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.729604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.729882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.729917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.730224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.730259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.730367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.730401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.730697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.730732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.731016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.731052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.731241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.731275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.731439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.731474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.731687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.731722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.731974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.732009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.732140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.732180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.732391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.732437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.732633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.732668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.732868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.732903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.733091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.733127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.733362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.733432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.733639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.733675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.733946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.733979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.734172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.734208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.734341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.734375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.734588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.734626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.734748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.734783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.735052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.735088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.735367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.735402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.735607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.735642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.735831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.735865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.735986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.736021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.736301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.736335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.736616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.736652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.736861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.736896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.737092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.737127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.737346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.737380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.737646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.737681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.737881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.737915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.738119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.738153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.738356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.738391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 
00:30:22.220 [2024-11-20 12:44:27.738588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.738624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.738755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.738789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.738987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.220 [2024-11-20 12:44:27.739021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.220 qpair failed and we were unable to recover it. 00:30:22.220 [2024-11-20 12:44:27.739214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.739249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.739436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.739471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 
00:30:22.221 [2024-11-20 12:44:27.739749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.739784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.740055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.740098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.740360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.740396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.740735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.740769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.741065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.741100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 
00:30:22.221 [2024-11-20 12:44:27.741249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.741283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.741485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.741522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.741728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.741763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.741986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.742021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.742206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.742240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 
00:30:22.221 [2024-11-20 12:44:27.742528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.742564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.742693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.742728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.742858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.742893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.743169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.743204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.743491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.743546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 
00:30:22.221 [2024-11-20 12:44:27.743809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.743844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.744132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.744167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.744461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.744498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.744767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.744802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.745029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.745070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 
00:30:22.221 [2024-11-20 12:44:27.745378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.745424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.745608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.745643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.745827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.745864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.746065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.746100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 00:30:22.221 [2024-11-20 12:44:27.746227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.746262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it. 
00:30:22.221 [2024-11-20 12:44:27.746455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.221 [2024-11-20 12:44:27.746491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.221 qpair failed and we were unable to recover it.
00:30:22.223 [2024-11-20 12:44:27.777000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.777035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.777221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.777258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.777545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.777580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.777823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.777865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.778180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.778215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 
00:30:22.223 [2024-11-20 12:44:27.778409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.778457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.778665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.778700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.778906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.778942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.779261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.779300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.779494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.779530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 
00:30:22.223 [2024-11-20 12:44:27.779809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.779844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.780032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.780068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.780365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.780403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.780653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.780690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.780972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.781007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 
00:30:22.223 [2024-11-20 12:44:27.781290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.781325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.781607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.781644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.781929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.781967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.782172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.782211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 00:30:22.223 [2024-11-20 12:44:27.782527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.223 [2024-11-20 12:44:27.782564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.223 qpair failed and we were unable to recover it. 
00:30:22.223 [2024-11-20 12:44:27.782838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.782873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.783106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.783149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.783360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.783395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.783544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.783579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.783883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.783918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.784168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.784203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.784392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.784447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.784720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.784755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.784955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.784990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.785187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.785222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.785426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.785462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.785714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.785749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.785888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.785923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.786035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.786069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.786350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.786383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.786591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.786627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.786829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.786864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.787131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.787165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.787386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.787431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.787738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.787772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.788057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.788092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.788371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.788406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.788608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.788645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.788844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.788879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.789080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.789115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.789405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.789454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.789706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.789740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.790043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.790078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.790261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.790296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.790445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.790481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.790702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.790737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.790989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.791025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.791284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.791320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.791533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.791569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.791682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.791718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.791995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.792030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.792291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.792326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.792506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.792543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.792729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.792764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.792943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.792978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.793189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.793223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.793483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.793526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.793676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.793711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.793894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.793929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.794205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.794239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.794458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.794495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.794692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.794727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.795033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.795069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.795357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.795392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.795728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.795763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.796036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.796071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.796363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.796398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.796655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.796689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.796866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.796901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 00:30:22.224 [2024-11-20 12:44:27.797121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.224 [2024-11-20 12:44:27.797156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.224 qpair failed and we were unable to recover it. 
00:30:22.224 [2024-11-20 12:44:27.797490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.225 [2024-11-20 12:44:27.797526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.225 qpair failed and we were unable to recover it.
[last two messages repeated continuously, identical except for timestamps, through 2024-11-20 12:44:27.830303]
00:30:22.227 [2024-11-20 12:44:27.830488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.830524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.830712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.830747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.830957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.830992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.831213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.831248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.831450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.831486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.831764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.831798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.832049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.832084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.832340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.832375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.832591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.832627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.832892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.832926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.833116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.833151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.833429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.833464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.833768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.833803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.834020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.834056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.834243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.834279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.834558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.834595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.834773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.834808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.835085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.835120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.835384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.835431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.835721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.835756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.836032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.836067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.836284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.836319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.836568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.836604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.836804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.836839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.837025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.837060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.837254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.837289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.837544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.837580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.837769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.837804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.838086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.838121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.838285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.838320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.838599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.838635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.838908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.838949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.839156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.839191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.839498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.839534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.839659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.839694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.839822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.839856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.840048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.840082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.840259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.840294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.840601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.840638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.840858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.840892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 
00:30:22.227 [2024-11-20 12:44:27.841163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.841198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.841426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.841462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.841639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.841674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.841963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.841998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.227 qpair failed and we were unable to recover it. 00:30:22.227 [2024-11-20 12:44:27.842122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.227 [2024-11-20 12:44:27.842157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.228 [2024-11-20 12:44:27.842349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.842386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.842724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.842760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.842898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.842933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.843209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.843244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.843512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.843549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.228 [2024-11-20 12:44:27.843749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.843784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.844060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.844094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.844367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.844402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.844660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.844695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.844998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.845033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.228 [2024-11-20 12:44:27.845333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.845368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.845668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.845704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.845841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.845876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.846153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.846214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.846504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.846540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.228 [2024-11-20 12:44:27.846859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.846894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.847153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.847188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.847474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.847511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.847830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.847865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.848051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.848087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.228 [2024-11-20 12:44:27.848346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.848381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.848665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.848700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.848922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.848957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.849260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.849294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.849482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.849518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.228 [2024-11-20 12:44:27.849658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.849692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.849956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.849990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.850287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.850321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.850602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.850637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.850887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.850921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.228 [2024-11-20 12:44:27.851134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.851169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.851362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.851397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.851608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.851644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.851931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.851965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 00:30:22.228 [2024-11-20 12:44:27.852241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.228 [2024-11-20 12:44:27.852275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.228 qpair failed and we were unable to recover it. 
00:30:22.230 [2024-11-20 12:44:27.883324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.883358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.883658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.883695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.883960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.883993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.884228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.884266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.884453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.884490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 
00:30:22.230 [2024-11-20 12:44:27.884697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.884731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.884931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.884965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.885273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.885308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.885584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.885619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.885829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.885864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 
00:30:22.230 [2024-11-20 12:44:27.885999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.886033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.886220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.886254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.886443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.886479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.886773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.886807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-11-20 12:44:27.887105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.230 [2024-11-20 12:44:27.887139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.230 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.887381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.887425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.887627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.887662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.887937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.887977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.888153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.888187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.888482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.888518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.888728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.888762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.888943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.888977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.889230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.889265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.889479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.889515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.889797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.889832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.890084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.890118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.890420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.890456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.890655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.890690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.890971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.891006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.891216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.891251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.891502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.891537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.891749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.891784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.892050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.892084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.892326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.892360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.892577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.892612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.892919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.892953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.893177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.893213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.893408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.893454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.893716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.893749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.893870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.893904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.894206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.894240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.894551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.894587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.894874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.894909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.895102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.895137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.895422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.895465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.895591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.895625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.895847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.895881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.896147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.896182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.896483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.896520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.896814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.896849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.897037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.897071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.897247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.897281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.897462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.897497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.897782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.897818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.898010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.898044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.898248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.898282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.898484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.898520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.898713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.898750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.898881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.898916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.899116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.899150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.899273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.899308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.899515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.899551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.899856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.899891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.900069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.900104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.900340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.900374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.900580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.900616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.900897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.900931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.901161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.901196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.901379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.901422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.901697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.901732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-11-20 12:44:27.901999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.902033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.902218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.231 [2024-11-20 12:44:27.902253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-11-20 12:44:27.902551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.232 [2024-11-20 12:44:27.902588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.232 qpair failed and we were unable to recover it. 00:30:22.232 [2024-11-20 12:44:27.902839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.232 [2024-11-20 12:44:27.902872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.232 qpair failed and we were unable to recover it. 00:30:22.232 [2024-11-20 12:44:27.903002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.232 [2024-11-20 12:44:27.903037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.232 qpair failed and we were unable to recover it. 
00:30:22.232 [2024-11-20 12:44:27.903293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.232 [2024-11-20 12:44:27.903327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.232 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error pair for tqpair=0x1504020 with addr=10.0.0.2, port=4420 repeats verbatim, timestamps 12:44:27.903459 through 12:44:27.930974 ...]
00:30:22.234 [2024-11-20 12:44:27.931332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.931409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.931910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.931950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.932249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.932283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.932561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.932598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.932780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.932815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.932953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.932990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.933267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.933302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.933490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.933525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.933788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.933823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.934008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.934043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.934339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.934374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.934670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.934706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.934981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.935016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.935144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.935191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.935499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.935535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.935802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.935837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.936025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.936059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.936337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.936372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.936639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.936676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.936992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.937027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.937349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.937384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.937671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.937706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.937950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.937985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.938257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.938292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.938539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.938575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.938894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.938929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.939221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.939256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.939531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.939568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.939855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.939890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.940171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.940206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.940489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.940525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.940727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.940762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.941015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.941050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.941277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.941312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.941549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.941585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.941856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.941891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.942007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.942042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.942318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.942352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.942541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.942578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.942840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.942874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.943062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.943138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.943370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.943409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.943631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.943666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.943948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.943983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.944122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.944157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.944344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.944379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.944687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.944729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.944929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.944964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.945268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.945302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.945517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.945552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 
00:30:22.234 [2024-11-20 12:44:27.945765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.945801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.945994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.946028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.234 [2024-11-20 12:44:27.946304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.234 [2024-11-20 12:44:27.946338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.234 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.946543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.946587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.946738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.946773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.235 [2024-11-20 12:44:27.946975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.947009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.947268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.947302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.947559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.947595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.947773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.947807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.947952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.947988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.235 [2024-11-20 12:44:27.948124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.948159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.948349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.948387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.948650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.948684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.948911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.948945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.949192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.949226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.235 [2024-11-20 12:44:27.949506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.949542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.949749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.949783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.950079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.950115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.950295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.950329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.950589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.950626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.235 [2024-11-20 12:44:27.950818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.950854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.950984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.951018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.951270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.951305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.951603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.951638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.951921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.951957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.235 [2024-11-20 12:44:27.952231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.952265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.952586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.952621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.952818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.952852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.953037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.953072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.953266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.953300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.235 [2024-11-20 12:44:27.953496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.953530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.953724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.953759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.954116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.954150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.954435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.954470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.954685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.954720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.235 [2024-11-20 12:44:27.954924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.954957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.955094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.955129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.955348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.955383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.955567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.955602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 00:30:22.235 [2024-11-20 12:44:27.955795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.235 [2024-11-20 12:44:27.955829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.235 qpair failed and we were unable to recover it. 
00:30:22.505 [2024-11-20 12:44:27.964256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.505 [2024-11-20 12:44:27.964310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:22.505 qpair failed and we were unable to recover it.
00:30:22.505 [2024-11-20 12:44:27.964455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.505 [2024-11-20 12:44:27.964493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:22.505 qpair failed and we were unable to recover it.
00:30:22.505 [2024-11-20 12:44:27.964696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.505 [2024-11-20 12:44:27.964731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:22.505 qpair failed and we were unable to recover it.
00:30:22.505 [2024-11-20 12:44:27.964879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.505 [2024-11-20 12:44:27.964912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:22.505 qpair failed and we were unable to recover it.
00:30:22.505 [2024-11-20 12:44:27.965191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.505 [2024-11-20 12:44:27.965225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:22.505 qpair failed and we were unable to recover it.
00:30:22.507 [2024-11-20 12:44:27.984196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.984231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.984486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.984528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.984813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.984848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.985117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.985151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.985454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.985490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 
00:30:22.507 [2024-11-20 12:44:27.985772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.985805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.986093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.986127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.986323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.986358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.986637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.986672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.986923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.986958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 
00:30:22.507 [2024-11-20 12:44:27.987118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.987152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.507 qpair failed and we were unable to recover it. 00:30:22.507 [2024-11-20 12:44:27.987350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.507 [2024-11-20 12:44:27.987384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.987671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.987706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.987831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.987865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.988065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.988101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 12:44:27.988291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.988327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.988661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.988698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.988977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.989012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.989219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.989253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.989494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.989530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 12:44:27.989724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.989759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.989962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.989996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.990212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.990246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.990430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.990466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.990795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.990831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 12:44:27.991047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.991084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.991237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.991273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.991562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.991597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.991721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.991754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.992064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.992100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 12:44:27.992378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.992435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.992632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.992666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.992882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.992917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.993211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.993245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.993517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.993553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 12:44:27.993675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.993710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.993984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.994019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.994201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.994236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.994442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.994479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.994585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.994621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 12:44:27.994890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.994924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.995120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.995154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.995426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.995473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.995668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.995702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.995954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.995988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.508 [2024-11-20 12:44:27.996295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.996329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.996561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.996596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.996786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.996820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.997154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.997189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 00:30:22.508 [2024-11-20 12:44:27.997476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.508 [2024-11-20 12:44:27.997512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.508 qpair failed and we were unable to recover it. 
00:30:22.509 [2024-11-20 12:44:27.997787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.997820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:27.998001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.998034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:27.998260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.998294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:27.998480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.998515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:27.998745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.998779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 
00:30:22.509 [2024-11-20 12:44:27.999002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.999035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:27.999304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.999339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:27.999542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.999578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:27.999868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:27.999902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.000130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.000165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 
00:30:22.509 [2024-11-20 12:44:28.000371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.000406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.000608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.000643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.000769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.000803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.001014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.001048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.001319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.001353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 
00:30:22.509 [2024-11-20 12:44:28.001688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.001724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.001980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.002013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.002216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.002251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.002478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.002514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.002641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.002681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 
00:30:22.509 [2024-11-20 12:44:28.002982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.003017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.003208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.003243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.003452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.003487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.003748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.003783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.004094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.004129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 
00:30:22.509 [2024-11-20 12:44:28.004452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.509 [2024-11-20 12:44:28.004487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.509 qpair failed and we were unable to recover it. 00:30:22.509 [2024-11-20 12:44:28.004760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.510 [2024-11-20 12:44:28.004794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.510 qpair failed and we were unable to recover it. 00:30:22.510 [2024-11-20 12:44:28.005035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.510 [2024-11-20 12:44:28.005070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.510 qpair failed and we were unable to recover it. 00:30:22.510 [2024-11-20 12:44:28.005198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.510 [2024-11-20 12:44:28.005232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.510 qpair failed and we were unable to recover it. 00:30:22.510 [2024-11-20 12:44:28.005510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.510 [2024-11-20 12:44:28.005545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.510 qpair failed and we were unable to recover it. 
[... ~110 further near-identical retry entries elided, spanning 2024-11-20 12:44:28.005760 through 12:44:28.035605: each is posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, first for tqpair=0x1504020, then (from 12:44:28.021254) for tqpair=0x7f2b5c000b90, then (from 12:44:28.032184) for tqpair=0x7f2b60000b90, all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:22.512 [2024-11-20 12:44:28.035884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.512 [2024-11-20 12:44:28.035929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.036149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.036184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.036378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.036424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.036638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.036673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.036866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.036903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 
00:30:22.513 [2024-11-20 12:44:28.037088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.037124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.037304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.037339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.037549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.037585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.037859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.037894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.038081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.038116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 
00:30:22.513 [2024-11-20 12:44:28.038393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.038440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.038570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.038603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.038907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.038942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.039146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.039181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.039466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.039501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 
00:30:22.513 [2024-11-20 12:44:28.039678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.039713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.039991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.040027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.040314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.040349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.040626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.040663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.040797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.040831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 
00:30:22.513 [2024-11-20 12:44:28.041021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.041056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.041163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.041198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.041479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.041515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.041799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.041834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.042118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.042154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 
00:30:22.513 [2024-11-20 12:44:28.042436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.042471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.042615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.042649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.042852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.042899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.043015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.043050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.043185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.043221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 
00:30:22.513 [2024-11-20 12:44:28.043435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.043471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.043716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.043750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.043938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.043974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.044196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.044233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.044445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.044481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 
00:30:22.513 [2024-11-20 12:44:28.044617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.044651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.044929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.044963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.045219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.513 [2024-11-20 12:44:28.045255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.513 qpair failed and we were unable to recover it. 00:30:22.513 [2024-11-20 12:44:28.045460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.045495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.045684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.045721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 
00:30:22.514 [2024-11-20 12:44:28.045939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.045975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.046237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.046272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.046577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.046613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.046742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.046776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.047085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.047119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 
00:30:22.514 [2024-11-20 12:44:28.047394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.047438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.047689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.047724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.047915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.047950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.048200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.048236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.048453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.048490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 
00:30:22.514 [2024-11-20 12:44:28.048708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.048743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.048881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.048915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.049045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.049083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.049282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.049317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.049453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.049489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 
00:30:22.514 [2024-11-20 12:44:28.049664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.049698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.049882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.049918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.050109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.050145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.050429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.050465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.050675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.050709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 
00:30:22.514 [2024-11-20 12:44:28.050991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.051025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.051217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.051252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.051444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.051480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.051669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.051704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.051882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.051916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 
00:30:22.514 [2024-11-20 12:44:28.052044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.052078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.052284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.052320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.052550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.052592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.052898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.052932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.053214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.053248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 
00:30:22.514 [2024-11-20 12:44:28.053530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.514 [2024-11-20 12:44:28.053568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.514 qpair failed and we were unable to recover it. 00:30:22.514 [2024-11-20 12:44:28.053759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.053794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 00:30:22.515 [2024-11-20 12:44:28.054067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.054102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 00:30:22.515 [2024-11-20 12:44:28.054452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.054488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 00:30:22.515 [2024-11-20 12:44:28.054721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.054756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 
00:30:22.515 [2024-11-20 12:44:28.054887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.054922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 00:30:22.515 [2024-11-20 12:44:28.055204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.055239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 00:30:22.515 [2024-11-20 12:44:28.055424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.055459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 00:30:22.515 [2024-11-20 12:44:28.055654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.055690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 00:30:22.515 [2024-11-20 12:44:28.055969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.515 [2024-11-20 12:44:28.056003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.515 qpair failed and we were unable to recover it. 
00:30:22.518 [2024-11-20 12:44:28.085965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.086000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.086253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.086287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.086492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.086528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.086806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.086840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.087021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.087055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 
00:30:22.518 [2024-11-20 12:44:28.087335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.087369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.087635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.087672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.087858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.087894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.088075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.088109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.088310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.088344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 
00:30:22.518 [2024-11-20 12:44:28.088625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.088661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.088939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.088973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.089264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.089299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.089596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.089632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.089842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.089876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 
00:30:22.518 [2024-11-20 12:44:28.090052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.090087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.090282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.090317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.090588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.090624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.090813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.090846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.091034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.091069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 
00:30:22.518 [2024-11-20 12:44:28.091316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.091350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.091722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.091757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.091971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.092007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.092223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.092257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.092444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.092481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 
00:30:22.518 [2024-11-20 12:44:28.092793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.092829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.093028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.093063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.093347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.518 [2024-11-20 12:44:28.093380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.518 qpair failed and we were unable to recover it. 00:30:22.518 [2024-11-20 12:44:28.093692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.093728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.093993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.094027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.094158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.094194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.094379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.094427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.094643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.094678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.094948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.094982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.095160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.095195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.095479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.095515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.095762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.095797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.096090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.096139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.096398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.096455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.096749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.096785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.096976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.097014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.097339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.097374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.097614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.097650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.097959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.097993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.098144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.098179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.098450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.098486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.098789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.098824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.099023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.099058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.099268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.099303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.099556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.099592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.099789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.099824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.100108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.100144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.100382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.100424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.100702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.100736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.101011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.101045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.101301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.101336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.101611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.101646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.101926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.101964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.102252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.102288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.102577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.102613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.102888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.102923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.103148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.103184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.103379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.103426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.103712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.103748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.103876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.103914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 
00:30:22.519 [2024-11-20 12:44:28.104209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.519 [2024-11-20 12:44:28.104245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.519 qpair failed and we were unable to recover it. 00:30:22.519 [2024-11-20 12:44:28.104550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.104586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 00:30:22.520 [2024-11-20 12:44:28.104786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.104820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 00:30:22.520 [2024-11-20 12:44:28.105090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.105124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 00:30:22.520 [2024-11-20 12:44:28.105376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.105436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 
00:30:22.520 [2024-11-20 12:44:28.105728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.105763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 00:30:22.520 [2024-11-20 12:44:28.105962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.105996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 00:30:22.520 [2024-11-20 12:44:28.106189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.106224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 00:30:22.520 [2024-11-20 12:44:28.106406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.106450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 00:30:22.520 [2024-11-20 12:44:28.106702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.520 [2024-11-20 12:44:28.106737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:22.520 qpair failed and we were unable to recover it. 
00:30:22.520 [2024-11-20 12:44:28.107068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.520 [2024-11-20 12:44:28.107102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:22.520 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with timestamps 12:44:28.107224 through 12:44:28.134718 ...]
00:30:22.523 [2024-11-20 12:44:28.134981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.523 [2024-11-20 12:44:28.135050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420
00:30:22.523 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f2b5c000b90 with timestamps 12:44:28.135231 through 12:44:28.136288 ...]
00:30:22.523 [2024-11-20 12:44:28.136505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.136536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.136813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.136844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.137158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.137190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.137311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.137342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.137554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.137586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 
00:30:22.523 [2024-11-20 12:44:28.137767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.137799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.138068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.138099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.138371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.138427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.138646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.138679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.138947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.138979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 
00:30:22.523 [2024-11-20 12:44:28.139190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.139223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.139478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.139510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.139691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.139722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.139903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.139935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.140157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.140187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 
00:30:22.523 [2024-11-20 12:44:28.140447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.140479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.140775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.140806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.141082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.141113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.141305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.141335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.141451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.141482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 
00:30:22.523 [2024-11-20 12:44:28.141616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.141647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.141929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.141960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.523 qpair failed and we were unable to recover it. 00:30:22.523 [2024-11-20 12:44:28.143616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.523 [2024-11-20 12:44:28.143674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.143889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.143931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.145533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.145585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 
00:30:22.524 [2024-11-20 12:44:28.145875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.145908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.147922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.147985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.148238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.148276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.148535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.148569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.148687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.148717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 
00:30:22.524 [2024-11-20 12:44:28.149014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.149043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.149307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.149337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.149594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.149625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.149808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.149838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.150142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.150212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 
00:30:22.524 [2024-11-20 12:44:28.150481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.150517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.150775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.150807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.151004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.151036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.151285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.151317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.524 qpair failed and we were unable to recover it. 00:30:22.524 [2024-11-20 12:44:28.151504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.524 [2024-11-20 12:44:28.151537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.793 qpair failed and we were unable to recover it. 
00:30:22.793 [2024-11-20 12:44:28.427007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.793 [2024-11-20 12:44:28.427068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.793 qpair failed and we were unable to recover it. 00:30:22.793 [2024-11-20 12:44:28.427347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.793 [2024-11-20 12:44:28.427385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.793 qpair failed and we were unable to recover it. 00:30:22.793 [2024-11-20 12:44:28.427742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.793 [2024-11-20 12:44:28.427775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.793 qpair failed and we were unable to recover it. 00:30:22.793 [2024-11-20 12:44:28.427999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.793 [2024-11-20 12:44:28.428031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.793 qpair failed and we were unable to recover it. 00:30:22.793 [2024-11-20 12:44:28.428316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.793 [2024-11-20 12:44:28.428346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.793 qpair failed and we were unable to recover it. 
00:30:22.793 [2024-11-20 12:44:28.428579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.793 [2024-11-20 12:44:28.428612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.793 qpair failed and we were unable to recover it. 00:30:22.793 [2024-11-20 12:44:28.428877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.793 [2024-11-20 12:44:28.428910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.429054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.429094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.429364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.429400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.429659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.429694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 
00:30:22.794 [2024-11-20 12:44:28.429963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.429995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.430201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.430234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.430545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.430580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.430778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.430811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.430937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.430970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 
00:30:22.794 [2024-11-20 12:44:28.431086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.431120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.431381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.431424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.431559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.431594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.431849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.431883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.432178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.432210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 
00:30:22.794 [2024-11-20 12:44:28.432335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.432368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.432512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.432547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.432671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.432705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.432946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.432980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.433116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.433150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 
00:30:22.794 [2024-11-20 12:44:28.433320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.433354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.433615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.433650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.433770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.433801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.433989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.434023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.434224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.434258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 
00:30:22.794 [2024-11-20 12:44:28.434391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.434434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.434664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.434698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.434830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.434874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.435020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.435052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.435230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.435300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 
00:30:22.794 [2024-11-20 12:44:28.435494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.435536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.435724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.435758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.436029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.436062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.436302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.436334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.436464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.436499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 
00:30:22.794 [2024-11-20 12:44:28.436738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.436771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.436908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.436942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.437221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.437255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.437617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.794 [2024-11-20 12:44:28.437654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.794 qpair failed and we were unable to recover it. 00:30:22.794 [2024-11-20 12:44:28.437849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.437883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 [2024-11-20 12:44:28.438084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.438117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.438296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1105801 Killed "${NVMF_APP[@]}" "$@" 00:30:22.795 [2024-11-20 12:44:28.438332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.438619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.438663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.438767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.438801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.438975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.439010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:22.795 [2024-11-20 12:44:28.439273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.439307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:22.795 [2024-11-20 12:44:28.439532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.439568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.795 [2024-11-20 12:44:28.439844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.439879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.440066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.440100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.795 [2024-11-20 12:44:28.440216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.440250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.795 [2024-11-20 12:44:28.440484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.440520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.440636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.440669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.440860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.440893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.441105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.441139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 [2024-11-20 12:44:28.441394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.441438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.441628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.441662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.441807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.441840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.442074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.442107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.442277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.442311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 [2024-11-20 12:44:28.442482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.442518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.442702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.442735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.442932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.442965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.443224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.443257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.443460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.443494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 [2024-11-20 12:44:28.443672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.443705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.443833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.443867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.444039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.444072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.444251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.444318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.444563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.444606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 [2024-11-20 12:44:28.444798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.444831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.444948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.444981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.445122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.445155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.445437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.445472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 00:30:22.795 [2024-11-20 12:44:28.445649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.445683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.795 qpair failed and we were unable to recover it. 
00:30:22.795 [2024-11-20 12:44:28.445950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.795 [2024-11-20 12:44:28.445990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.446112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.446145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.446421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.446456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.446594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.446627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.446801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.446836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 
00:30:22.796 [2024-11-20 12:44:28.446964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.446999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1106797 00:30:22.796 [2024-11-20 12:44:28.447127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.447162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.447342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.447378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1106797 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:22.796 [2024-11-20 12:44:28.447596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.447638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 
00:30:22.796 [2024-11-20 12:44:28.447780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.447815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1106797 ']' 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.448007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.448041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.796 [2024-11-20 12:44:28.448304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.448340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.796 [2024-11-20 12:44:28.448515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.448551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 
00:30:22.796 [2024-11-20 12:44:28.448678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.448712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.796 [2024-11-20 12:44:28.448893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.448929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.796 [2024-11-20 12:44:28.449134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.449169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 12:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.796 [2024-11-20 12:44:28.449361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.449396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 
00:30:22.796 [2024-11-20 12:44:28.449552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.449586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.449828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.449861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.450067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.450101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.450279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.450312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.450482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.450516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 
00:30:22.796 [2024-11-20 12:44:28.450692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.450726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.450834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.450867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.451006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.451039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.451307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.451341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.451478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.451514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 
00:30:22.796 [2024-11-20 12:44:28.451639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.451672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.451963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.451998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.452221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.452256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.452434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.452468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.452588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.452622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 
00:30:22.796 [2024-11-20 12:44:28.452795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.452829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.453053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.796 [2024-11-20 12:44:28.453089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.796 qpair failed and we were unable to recover it. 00:30:22.796 [2024-11-20 12:44:28.453274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.453308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.453437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.453471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.453682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.453716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 
00:30:22.797 [2024-11-20 12:44:28.453846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.453880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.454170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.454205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.454333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.454366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.454503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.454538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.454731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.454767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 
00:30:22.797 [2024-11-20 12:44:28.455006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.455045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.455238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.455272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.455472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.455507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.455678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.455712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.455884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.455918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 
00:30:22.797 [2024-11-20 12:44:28.456064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.456098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.456280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.456314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.456502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.456538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.456881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.456915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 00:30:22.797 [2024-11-20 12:44:28.457039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.797 [2024-11-20 12:44:28.457073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.797 qpair failed and we were unable to recover it. 
00:30:22.800 [2024-11-20 12:44:28.482737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.482770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.482960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.482993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.483200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.483248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.483532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.483568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.483829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.483862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 
00:30:22.800 [2024-11-20 12:44:28.484105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.484139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.484326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.484359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.484610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.484644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.484784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.484816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.485097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.485131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 
00:30:22.800 [2024-11-20 12:44:28.485230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.485263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.485458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.485491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.485624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.485657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.485785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.485820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.486009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.486042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 
00:30:22.800 [2024-11-20 12:44:28.486175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.486209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.486465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.486500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.486765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.486799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.487002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.800 [2024-11-20 12:44:28.487035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.800 qpair failed and we were unable to recover it. 00:30:22.800 [2024-11-20 12:44:28.487325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.487359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.487562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.487596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.487780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.487813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.487951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.487985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.488282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.488316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.488489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.488524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.488660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.488693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.488898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.488931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.489065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.489098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.489337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.489371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.489568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.489608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.489796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.489829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.490014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.490047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.490298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.490331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.490487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.490521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.490761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.490794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.490979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.491013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.491140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.491174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.491441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.491476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.491683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.491716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.491904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.491938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.492133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.492166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.492354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.492387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.492670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.492704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.492834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.492867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.493054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.493087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.493202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.493236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.493489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.493523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.493657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.493690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.493867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.493900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.494126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.494159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.494361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.494395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.494645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.494679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.494877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.494910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.495166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.495199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.495339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.495372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 
00:30:22.801 [2024-11-20 12:44:28.495628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.495664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.495886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.801 [2024-11-20 12:44:28.495919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.801 qpair failed and we were unable to recover it. 00:30:22.801 [2024-11-20 12:44:28.496071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.496105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.496361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.496358] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:30:22.802 [2024-11-20 12:44:28.496397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 [2024-11-20 12:44:28.496409] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.802 qpair failed and we were unable to recover it. 
00:30:22.802 [2024-11-20 12:44:28.496559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.496591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.496721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.496751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.496931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.496961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.497249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.497280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.497479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.497514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 
00:30:22.802 [2024-11-20 12:44:28.497716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.497749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.497927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.497960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.498242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.498276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.498485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.498519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.498716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.498749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 
00:30:22.802 [2024-11-20 12:44:28.499023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.499058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.499240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.499274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.499515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.499549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.499730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.499763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 00:30:22.802 [2024-11-20 12:44:28.500049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.802 [2024-11-20 12:44:28.500082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.802 qpair failed and we were unable to recover it. 
00:30:22.802 [2024-11-20 12:44:28.500269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.802 [2024-11-20 12:44:28.500303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:22.802 qpair failed and we were unable to recover it.
00:30:22.802 [... the same three-line failure sequence repeats for every subsequent reconnect attempt from 12:44:28.500 through 12:44:28.527 (log timestamps 00:30:22.802-00:30:22.805): connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it ...]
00:30:22.805 [2024-11-20 12:44:28.527121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.527154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.527344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.527377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.527629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.527701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.527935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.527972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.528233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.528267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 
00:30:22.805 [2024-11-20 12:44:28.528494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.528530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.528659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.528693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.528931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.528964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.529218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.529250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.529376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.529408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 
00:30:22.805 [2024-11-20 12:44:28.529534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.529568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.529692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.529725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.529904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.529937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.530070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.530103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 00:30:22.805 [2024-11-20 12:44:28.530356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.805 [2024-11-20 12:44:28.530390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.805 qpair failed and we were unable to recover it. 
00:30:22.805 [2024-11-20 12:44:28.530574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.530617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.530788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.530821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.531005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.531038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.531310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.531343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.531535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.531569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 
00:30:22.806 [2024-11-20 12:44:28.531755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.531789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.531907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.531940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.532264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.532297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.532564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.532598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.532717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.532750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 
00:30:22.806 [2024-11-20 12:44:28.532962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.532995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.533233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.533268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.533461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.533495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.533715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.533748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.533992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.534026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 
00:30:22.806 [2024-11-20 12:44:28.534234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.534266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.534373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.534407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.534618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.534653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.534849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.534882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.535112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.535146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 
00:30:22.806 [2024-11-20 12:44:28.535276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.535310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.535432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.535466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.535724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.535756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.535874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.535912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.536042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.536074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 
00:30:22.806 [2024-11-20 12:44:28.536271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.536301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.536505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.536539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.536827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.536898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.537115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.537153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.537398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.537446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 
00:30:22.806 [2024-11-20 12:44:28.537636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.537668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.537770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.537801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.537926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.537958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.538248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.538281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.538545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.538579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 
00:30:22.806 [2024-11-20 12:44:28.538766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.538798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.538907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.538938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.806 [2024-11-20 12:44:28.539185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.806 [2024-11-20 12:44:28.539218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.806 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.539389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.539434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.539553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.539586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 
00:30:22.807 [2024-11-20 12:44:28.539702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.539744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.541898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.541959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.542293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.542330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.542564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.542598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.542718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.542753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 
00:30:22.807 [2024-11-20 12:44:28.542943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.542977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.543203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.543236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.543503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.543537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.543668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.543700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.543829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.543862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 
00:30:22.807 [2024-11-20 12:44:28.544043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.544076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:22.807 [2024-11-20 12:44:28.544256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.807 [2024-11-20 12:44:28.544289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:22.807 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.544408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.544453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.544586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.544618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.544896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.544930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 
00:30:23.083 [2024-11-20 12:44:28.545255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.545289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.545505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.545539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.545651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.545684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.545874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.545907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.546116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.546149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 
00:30:23.083 [2024-11-20 12:44:28.546267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.546301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.546493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.546527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.546647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.546681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.546894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.546927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.547052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.547086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 
00:30:23.083 [2024-11-20 12:44:28.547200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.547233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.547470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.547504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.547688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.547759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.547896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.547932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 00:30:23.083 [2024-11-20 12:44:28.548112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.083 [2024-11-20 12:44:28.548146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.083 qpair failed and we were unable to recover it. 
00:30:23.083 [2024-11-20 12:44:28.548320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.083 [2024-11-20 12:44:28.548353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.083 qpair failed and we were unable to recover it.
00:30:23.083 [2024-11-20 12:44:28.548654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.083 [2024-11-20 12:44:28.548689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.083 qpair failed and we were unable to recover it.
00:30:23.083 [2024-11-20 12:44:28.548806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.083 [2024-11-20 12:44:28.548839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.083 qpair failed and we were unable to recover it.
00:30:23.083 [2024-11-20 12:44:28.549020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.083 [2024-11-20 12:44:28.549055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.083 qpair failed and we were unable to recover it.
00:30:23.083 [2024-11-20 12:44:28.549245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.083 [2024-11-20 12:44:28.549279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.083 qpair failed and we were unable to recover it.
00:30:23.083 [2024-11-20 12:44:28.549453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.549486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.549768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.549802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.549922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.549956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.550138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.550170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.550349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.550382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.550711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.550799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.551029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.551066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.551250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.551283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.551529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.551566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.551697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.551730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.551841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.551875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.552066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.552098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.552353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.552386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.552591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.552625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.552811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.552845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.552962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.552996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.553117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.553150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.553248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.553282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.553514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.553549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.553673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.553707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.553883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.553916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.554097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.554131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.554375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.554409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.554590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.554623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.554743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.554777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.554961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.554994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.555173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.555207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.555424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.555459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.555651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.555685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.555863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.555897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.556133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.556405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.556448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.556577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.556611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.556799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.556833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.557026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.557060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.557242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.557275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.557386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.557431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.084 [2024-11-20 12:44:28.557563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.084 [2024-11-20 12:44:28.557596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.084 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.557792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.557826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.557955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.557988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.558233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.558266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.558378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.558423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.558608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.558641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.558769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.558802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.558934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.558967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.559141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.559175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.559372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.559439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.559586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.559620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.559860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.559892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.560060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.560096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.560245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.560279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.560453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.560490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.560592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.560626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.560738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.560771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.560897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.560931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.561056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.561090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.561289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.561323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.561512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.561547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.561671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.561708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.561882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.561924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.562162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.562196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.562306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.562339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.562518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.562553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.562672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.562705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.562802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.562836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.563006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.563039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.563215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.563249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.563428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.563463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.563629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.563663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.563772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.563807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.564068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.564102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.564367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.564400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.564604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.564639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.564767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.564800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.564979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.085 [2024-11-20 12:44:28.565013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.085 qpair failed and we were unable to recover it.
00:30:23.085 [2024-11-20 12:44:28.565132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.565166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.565362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.565395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.565634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.565669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.565778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.565811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.565924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.565958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.566131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.566164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.566271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.566305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.566424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.566458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.566629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.566664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.566845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.566879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.567064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.567097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.567256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.567317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.567521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.567559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.567735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.567768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.568014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.086 [2024-11-20 12:44:28.568048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420
00:30:23.086 qpair failed and we were unable to recover it.
00:30:23.086 [2024-11-20 12:44:28.568236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.568270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.568454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.568489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.568596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.568630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.568753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.568790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.568989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.569023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 
00:30:23.086 [2024-11-20 12:44:28.569146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.569180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.569287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.569321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.569492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.569526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.569809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.569842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.569945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.569988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 
00:30:23.086 [2024-11-20 12:44:28.570100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.570133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.570255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.570289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.570392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.570444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.570651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.570685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.570797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.570831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 
00:30:23.086 [2024-11-20 12:44:28.571008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.571041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.571160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.571193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.571382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.571427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.571557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.571589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.571763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.571797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 
00:30:23.086 [2024-11-20 12:44:28.571902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.571935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.572053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.572087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.086 qpair failed and we were unable to recover it. 00:30:23.086 [2024-11-20 12:44:28.572255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.086 [2024-11-20 12:44:28.572289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.572409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.572455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.572625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.572659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 
00:30:23.087 [2024-11-20 12:44:28.572781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.572816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.572995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.573029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.573137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.573170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.573270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.573304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.573429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.573464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 
00:30:23.087 [2024-11-20 12:44:28.573626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.573660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.573846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.573879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.573993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.574027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.574275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.574307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.574401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.574447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 
00:30:23.087 [2024-11-20 12:44:28.574548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.574571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:23.087 [2024-11-20 12:44:28.574581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.574697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.574729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.574941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.574975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.575154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.575189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.575362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.575396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 
00:30:23.087 [2024-11-20 12:44:28.575590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.575624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.575754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.575787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.575891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.575925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.576037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.576071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.576254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.576287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 
00:30:23.087 [2024-11-20 12:44:28.576419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.576454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.576551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.576585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.576757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.576789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.576971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.577005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.577109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.577149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 
00:30:23.087 [2024-11-20 12:44:28.577278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.577312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.577421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.577456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.577696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.577730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.577938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.577971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.578139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.578173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 
00:30:23.087 [2024-11-20 12:44:28.578297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.087 [2024-11-20 12:44:28.578330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.087 qpair failed and we were unable to recover it. 00:30:23.087 [2024-11-20 12:44:28.578459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.578494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.578670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.578703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.578832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.578867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.579038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.579072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 
00:30:23.088 [2024-11-20 12:44:28.579198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.579232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.579452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.579487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.579657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.579690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.579948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.579982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.580150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.580183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 
00:30:23.088 [2024-11-20 12:44:28.580291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.580325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.580448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.580483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.580606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.580640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.580767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.580802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.580905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.580939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 
00:30:23.088 [2024-11-20 12:44:28.581123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.581157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.581271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.581306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.581559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.581594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.581704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.581737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.581919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.581954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 
00:30:23.088 [2024-11-20 12:44:28.582058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.582091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.582206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.582240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.582359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.582392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.582510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.582544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.582653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.582686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 
00:30:23.088 [2024-11-20 12:44:28.582894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.582927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.583023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.583056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.583303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.583338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.583447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.583482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.583590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.583623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 
00:30:23.088 [2024-11-20 12:44:28.583737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.583771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.583889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.583922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.584125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.584158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.584262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.584296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 00:30:23.088 [2024-11-20 12:44:28.584420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.088 [2024-11-20 12:44:28.584461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.088 qpair failed and we were unable to recover it. 
00:30:23.091 [2024-11-20 12:44:28.605459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.091 [2024-11-20 12:44:28.605493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.091 qpair failed and we were unable to recover it. 00:30:23.091 [2024-11-20 12:44:28.605680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.091 [2024-11-20 12:44:28.605713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.091 qpair failed and we were unable to recover it. 00:30:23.091 [2024-11-20 12:44:28.605822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.091 [2024-11-20 12:44:28.605855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.091 qpair failed and we were unable to recover it. 00:30:23.091 [2024-11-20 12:44:28.606021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.091 [2024-11-20 12:44:28.606055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.091 qpair failed and we were unable to recover it. 00:30:23.091 [2024-11-20 12:44:28.606264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.091 [2024-11-20 12:44:28.606298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.091 qpair failed and we were unable to recover it. 
00:30:23.091 [2024-11-20 12:44:28.606398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.091 [2024-11-20 12:44:28.606440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.091 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.606538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.606570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.606811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.606845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.606954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.606987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.607104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.607143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 
00:30:23.092 [2024-11-20 12:44:28.607313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.607346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.607528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.607562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.607801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.607836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.607934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.607967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.608082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.608115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 
00:30:23.092 [2024-11-20 12:44:28.608214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.608249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.608352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.608385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.608487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.608521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.608761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.608794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.608901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.608936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 
00:30:23.092 [2024-11-20 12:44:28.609059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.609092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.609263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.609297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.609399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.609442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.609621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.609656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.609782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.609816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 
00:30:23.092 [2024-11-20 12:44:28.609991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.610024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.610152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.610186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.610365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.610398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.610547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.610581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.610781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.610814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 
00:30:23.092 [2024-11-20 12:44:28.610913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.610946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.611055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.611089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.611263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.611297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.611475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.611510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.611682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.611715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 
00:30:23.092 [2024-11-20 12:44:28.611953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.611986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.612101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.612135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.612377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.612410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.612584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.092 [2024-11-20 12:44:28.612617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.092 qpair failed and we were unable to recover it. 00:30:23.092 [2024-11-20 12:44:28.612798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.612830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 
00:30:23.093 [2024-11-20 12:44:28.612932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.612965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.613246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.613279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.613489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.613523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.613631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.613666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.613840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.613875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 
00:30:23.093 [2024-11-20 12:44:28.614047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.614083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.614205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.614239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.614438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.614474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.614605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.614638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.614852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.614892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 
00:30:23.093 [2024-11-20 12:44:28.615027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.615061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.615183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.615216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.615457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.615492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.615614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.615647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.615760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.615796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 
00:30:23.093 [2024-11-20 12:44:28.615921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.615959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.616133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.616166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.616334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.616369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.616493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.616528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.616719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.616752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.616990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.617024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 
00:30:23.093 [2024-11-20 12:44:28.616993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.093 [2024-11-20 12:44:28.617022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.093 [2024-11-20 12:44:28.617029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.093 [2024-11-20 12:44:28.617041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.093 [2024-11-20 12:44:28.617047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.093 [2024-11-20 12:44:28.617298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.617331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.617509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.617543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.617713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.617746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 
00:30:23.093 [2024-11-20 12:44:28.617935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.617968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.618153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.618187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.618308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.618341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.618512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.618545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.618656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.618690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.618615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 
00:30:23.093 [2024-11-20 12:44:28.618658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:23.093 [2024-11-20 12:44:28.618797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:23.093 [2024-11-20 12:44:28.618798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:23.093 [2024-11-20 12:44:28.618998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.619046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.619191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.619224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.619358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.619391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 00:30:23.093 [2024-11-20 12:44:28.619587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.093 [2024-11-20 12:44:28.619622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.093 qpair failed and we were unable to recover it. 
00:30:23.093 [2024-11-20 12:44:28.619832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.093 [2024-11-20 12:44:28.619866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420
00:30:23.093 qpair failed and we were unable to recover it.
00:30:23.097 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (connect() failed, errno = 111, addr=10.0.0.2, port=4420) repeats continuously from 12:44:28.619991 through 12:44:28.643232 for tqpair values 0x7f2b60000b90, 0x1504020, and 0x7f2b68000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:23.097 [2024-11-20 12:44:28.643355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.643388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.643573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.643608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.643794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.643827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.643996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.644042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.644142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.644175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-20 12:44:28.644348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.644395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.644606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.644640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.644745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.644778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.644909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.644944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.645186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.645220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-20 12:44:28.645408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.645453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.645691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.645725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.645829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.645862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.645973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.646008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.646189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.646223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-20 12:44:28.646328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.646362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.646630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.646664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.646848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.646882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.647065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.647099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.647281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.647316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-20 12:44:28.647444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.647480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.647655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.647689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.647782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.647817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.647984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.648017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.648139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.648174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-20 12:44:28.648430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.648466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.648737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.648771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.648955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.648988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.649229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.649263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.649388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.649428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-20 12:44:28.649605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.649639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.649822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-20 12:44:28.649857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-20 12:44:28.650041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.650079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.650250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.650283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.650380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.650421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-20 12:44:28.650595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.650629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.650871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.650905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.651014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.651048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.651167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.651200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.651314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.651348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-20 12:44:28.651513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.651548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.651652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.651686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.651873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.651909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.652091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.652126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.652294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.652327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-20 12:44:28.652513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.652556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.652749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.652784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.653029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.653064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.653163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.653197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.653376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.653421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-20 12:44:28.653529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.653563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.653754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.653788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.653971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.654006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.654274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.654308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.654489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.654524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-20 12:44:28.654739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.654775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.654901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.654937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.655139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.655174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.655290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.655323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.655517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.655552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-20 12:44:28.655795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.655828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.655919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.655951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.656136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.656169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.656451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.656486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.656661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.656695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-20 12:44:28.656876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.656909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.657105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.657137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.657346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.657380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b68000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.657533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.657590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-20 12:44:28.657719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-20 12:44:28.657753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.099 [2024-11-20 12:44:28.657926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.657958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.658058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.658091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.658340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.658374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.658568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.658602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.658844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.658880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 
00:30:23.099 [2024-11-20 12:44:28.659059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.659093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.659205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.659238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.659438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.659473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.659586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.659621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-20 12:44:28.659736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-20 12:44:28.659770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 
00:30:23.102 [2024-11-20 12:44:28.681841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.681874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.682122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.682155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.682421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.682456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.682640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.682673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.682767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.682800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 
00:30:23.102 [2024-11-20 12:44:28.682975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.683009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.683185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.683219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.683348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.683381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.683557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.683590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.683764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.683798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 
00:30:23.102 [2024-11-20 12:44:28.683908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.683941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.684049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.684082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.684180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.684214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.684315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.684348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.684521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.684556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 
00:30:23.102 [2024-11-20 12:44:28.684744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.684784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.685031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.685068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.685186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.685220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.685464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.685498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.685594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.685628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 
00:30:23.102 [2024-11-20 12:44:28.685723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.685768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.685953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.685986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.686079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.686112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.686233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.686266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.686392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.686434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 
00:30:23.102 [2024-11-20 12:44:28.686681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.686715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.102 qpair failed and we were unable to recover it. 00:30:23.102 [2024-11-20 12:44:28.686941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.102 [2024-11-20 12:44:28.686975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.687089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.687122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.687302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.687335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.687540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.687575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 
00:30:23.103 [2024-11-20 12:44:28.687707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.687741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.687860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.687894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.688099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.688132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.688236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.688270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.688450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.688485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 
00:30:23.103 [2024-11-20 12:44:28.688698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.688731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.688984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.689017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.689200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.689233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.689363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.689397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.689601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.689634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 
00:30:23.103 [2024-11-20 12:44:28.689740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.689774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.689902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.689935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.690040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.690074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.690284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.690318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.690431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.690465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 
00:30:23.103 [2024-11-20 12:44:28.690663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.690696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.690905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.690938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.691062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.691095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.691271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.691304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.691427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.691461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 
00:30:23.103 [2024-11-20 12:44:28.691568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.691601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.691728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.691760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.692001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.692034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.692144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.692177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.692282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.692314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 
00:30:23.103 [2024-11-20 12:44:28.692472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.692512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.692681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.692715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.692896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.692930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.693105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.693138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.693265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.693298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 
00:30:23.103 [2024-11-20 12:44:28.693428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.693464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.693641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.693674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.693852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.693885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.103 [2024-11-20 12:44:28.694137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.103 [2024-11-20 12:44:28.694171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.103 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.694344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.694376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.694622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.694656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.694777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.694811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.694921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.694954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.695164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.695197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.695389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.695435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.695634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.695668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.695906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.695939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.696128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.696161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.696336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.696371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.696549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.696584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.696687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.696719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.696897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.696934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.697097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.697130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.697233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.697266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.697381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.697451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.697617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.697651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.697752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.697785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.697891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.697925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.698102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.698135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.698271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.698304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.698426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.698461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.698646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.698679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.698789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.698823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.698922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.698954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.699054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.699089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.699340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.699373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.699595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.699629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.699737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.699770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.699949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.699982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.700086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.700118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.700304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.700343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.700541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.700575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.700764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.700796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.700963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.700996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.701110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.701143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-20 12:44:28.701354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.701387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-20 12:44:28.701516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 12:44:28.701549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.701732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.701767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.701943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.701977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.702149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.702183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-20 12:44:28.702364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.702397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.702503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.702536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.702639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.702672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.702843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.702875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.702999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.703033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-20 12:44:28.703272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.703305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.703486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.703519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.703694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.703726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.703903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.703936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.704143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.704176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-20 12:44:28.704345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.704378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b60000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.704530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.704595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2b5c000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.704729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.704786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.704896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.704930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.705051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.705084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-20 12:44:28.705194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.705228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.705345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.705378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 A controller has encountered a failure and is being reset. 00:30:23.105 [2024-11-20 12:44:28.705581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.705617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.705728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.705761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.705944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.705980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-20 12:44:28.706243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.706277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.706466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.706502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.706766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.706799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.706985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.707018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.707152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.707185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-20 12:44:28.707362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.707397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.707500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.707532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.707726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.707759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.707947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.707980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.708192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.708226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-20 12:44:28.708433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.708474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.708719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.708753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.708869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.708902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.709071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.709104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-20 12:44:28.709271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 12:44:28.709305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.106 [2024-11-20 12:44:28.709487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.709521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-20 12:44:28.709700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.709734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-20 12:44:28.709997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.710031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-20 12:44:28.710142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.710176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-20 12:44:28.710284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.710317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 
00:30:23.106 [2024-11-20 12:44:28.710502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.710538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-20 12:44:28.710734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.710768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-20 12:44:28.710871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.710903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-20 12:44:28.711076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.711109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504020 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 
00:30:23.106 [2024-11-20 12:44:28.711346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-20 12:44:28.711433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1511f60 with addr=10.0.0.2, port=4420 00:30:23.106 [2024-11-20 12:44:28.711464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511f60 is same with the state(6) to be set 00:30:23.106 [2024-11-20 12:44:28.711499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511f60 (9): Bad file descriptor 00:30:23.106 [2024-11-20 12:44:28.711528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:23.106 [2024-11-20 12:44:28.711550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:23.106 [2024-11-20 12:44:28.711573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:23.106 Unable to reset the controller. 
00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 Malloc0 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 [2024-11-20 
12:44:29.390390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 [2024-11-20 
12:44:29.419296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.674 12:44:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1106081 00:30:24.241 Controller properly reset. 00:30:29.516 Initializing NVMe Controllers 00:30:29.516 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:29.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:29.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:29.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:29.517 Initialization complete. Launching workers. 
00:30:29.517 Starting thread on core 1 00:30:29.517 Starting thread on core 2 00:30:29.517 Starting thread on core 3 00:30:29.517 Starting thread on core 0 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:29.517 00:30:29.517 real 0m11.260s 00:30:29.517 user 0m38.004s 00:30:29.517 sys 0m5.232s 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.517 ************************************ 00:30:29.517 END TEST nvmf_target_disconnect_tc2 00:30:29.517 ************************************ 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.517 rmmod nvme_tcp 00:30:29.517 rmmod nvme_fabrics 00:30:29.517 rmmod nvme_keyring 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1106797 ']' 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1106797 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1106797 ']' 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1106797 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106797 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106797' 00:30:29.517 killing process with pid 1106797 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1106797 00:30:29.517 12:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1106797 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.517 12:44:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.423 12:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.423 00:30:31.423 real 0m20.234s 00:30:31.423 user 1m4.836s 00:30:31.423 sys 0m10.661s 00:30:31.423 12:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.423 12:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:31.423 ************************************ 00:30:31.423 END TEST nvmf_target_disconnect 00:30:31.423 ************************************ 00:30:31.423 12:44:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:31.423 00:30:31.423 real 6m13.103s 00:30:31.423 user 11m42.211s 00:30:31.423 sys 2m1.797s 00:30:31.423 12:44:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.423 12:44:37 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.423 ************************************ 00:30:31.423 END TEST nvmf_host 00:30:31.423 ************************************ 00:30:31.423 12:44:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:31.423 12:44:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:31.423 12:44:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:31.423 12:44:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:31.423 12:44:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.423 12:44:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.682 ************************************ 00:30:31.682 START TEST nvmf_target_core_interrupt_mode 00:30:31.682 ************************************ 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:31.682 * Looking for test storage... 
00:30:31.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.682 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:31.683 12:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.683 --rc 
genhtml_branch_coverage=1 00:30:31.683 --rc genhtml_function_coverage=1 00:30:31.683 --rc genhtml_legend=1 00:30:31.683 --rc geninfo_all_blocks=1 00:30:31.683 --rc geninfo_unexecuted_blocks=1 00:30:31.683 00:30:31.683 ' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.683 --rc genhtml_branch_coverage=1 00:30:31.683 --rc genhtml_function_coverage=1 00:30:31.683 --rc genhtml_legend=1 00:30:31.683 --rc geninfo_all_blocks=1 00:30:31.683 --rc geninfo_unexecuted_blocks=1 00:30:31.683 00:30:31.683 ' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.683 --rc genhtml_branch_coverage=1 00:30:31.683 --rc genhtml_function_coverage=1 00:30:31.683 --rc genhtml_legend=1 00:30:31.683 --rc geninfo_all_blocks=1 00:30:31.683 --rc geninfo_unexecuted_blocks=1 00:30:31.683 00:30:31.683 ' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.683 --rc genhtml_branch_coverage=1 00:30:31.683 --rc genhtml_function_coverage=1 00:30:31.683 --rc genhtml_legend=1 00:30:31.683 --rc geninfo_all_blocks=1 00:30:31.683 --rc geninfo_unexecuted_blocks=1 00:30:31.683 00:30:31.683 ' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.683 
12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.683 12:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.683 
12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.683 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:31.943 ************************************ 00:30:31.943 START TEST nvmf_abort 00:30:31.943 ************************************ 00:30:31.943 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:31.943 * Looking for test storage... 
00:30:31.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:31.943 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:31.943 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:31.943 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:31.943 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:31.944 12:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.944 --rc genhtml_branch_coverage=1 00:30:31.944 --rc genhtml_function_coverage=1 00:30:31.944 --rc genhtml_legend=1 00:30:31.944 --rc geninfo_all_blocks=1 00:30:31.944 --rc geninfo_unexecuted_blocks=1 00:30:31.944 00:30:31.944 ' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.944 --rc genhtml_branch_coverage=1 00:30:31.944 --rc genhtml_function_coverage=1 00:30:31.944 --rc genhtml_legend=1 00:30:31.944 --rc geninfo_all_blocks=1 00:30:31.944 --rc geninfo_unexecuted_blocks=1 00:30:31.944 00:30:31.944 ' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.944 --rc genhtml_branch_coverage=1 00:30:31.944 --rc genhtml_function_coverage=1 00:30:31.944 --rc genhtml_legend=1 00:30:31.944 --rc geninfo_all_blocks=1 00:30:31.944 --rc geninfo_unexecuted_blocks=1 00:30:31.944 00:30:31.944 ' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.944 --rc genhtml_branch_coverage=1 00:30:31.944 --rc genhtml_function_coverage=1 00:30:31.944 --rc genhtml_legend=1 00:30:31.944 --rc geninfo_all_blocks=1 00:30:31.944 --rc geninfo_unexecuted_blocks=1 00:30:31.944 00:30:31.944 ' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.944 12:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.944 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.944 12:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.945 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.515 12:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:30:38.515 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:30:38.515 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.515 
12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:30:38.515 Found net devices under 0000:1a:00.0: cvl_0_0 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:30:38.515 Found net devices under 0000:1a:00.1: cvl_0_1 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.515 12:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:30:38.515 00:30:38.515 --- 10.0.0.2 ping statistics --- 00:30:38.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.515 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:38.515 00:30:38.515 --- 10.0.0.1 ping statistics --- 00:30:38.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.515 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1111697 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1111697 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1111697 ']' 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.515 12:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:38.515 [2024-11-20 12:44:43.871289] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:38.515 [2024-11-20 12:44:43.872145] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:30:38.515 [2024-11-20 12:44:43.872181] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.515 [2024-11-20 12:44:43.946518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:38.515 [2024-11-20 12:44:43.984889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.515 [2024-11-20 12:44:43.984924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.515 [2024-11-20 12:44:43.984930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.515 [2024-11-20 12:44:43.984936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.515 [2024-11-20 12:44:43.984940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.515 [2024-11-20 12:44:43.986426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.515 [2024-11-20 12:44:43.986526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.515 [2024-11-20 12:44:43.986527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:38.515 [2024-11-20 12:44:44.050738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:38.515 [2024-11-20 12:44:44.051435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:38.515 [2024-11-20 12:44:44.051907] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:38.515 [2024-11-20 12:44:44.052004] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:39.081 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.081 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:39.081 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.081 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.081 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.082 [2024-11-20 12:44:44.715347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:39.082 Malloc0 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.082 Delay0 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.082 [2024-11-20 12:44:44.807246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.082 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:39.340 [2024-11-20 12:44:44.977534] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:41.873 Initializing NVMe Controllers 00:30:41.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:41.873 controller IO queue size 128 less than required 00:30:41.873 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:41.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:41.873 Initialization complete. Launching workers. 
00:30:41.873 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42401 00:30:41.873 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42458, failed to submit 66 00:30:41.873 success 42401, unsuccessful 57, failed 0 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.873 rmmod nvme_tcp 00:30:41.873 rmmod nvme_fabrics 00:30:41.873 rmmod nvme_keyring 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.873 12:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1111697 ']' 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1111697 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1111697 ']' 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1111697 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1111697 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1111697' 00:30:41.873 killing process with pid 1111697 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1111697 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1111697 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.873 12:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.873 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.778 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.778 00:30:43.778 real 0m12.053s 00:30:43.778 user 0m10.993s 00:30:43.778 sys 0m5.859s 00:30:43.778 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.778 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.778 ************************************ 00:30:43.778 END TEST nvmf_abort 00:30:43.778 ************************************ 00:30:44.037 12:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:44.037 ************************************ 00:30:44.037 START TEST nvmf_ns_hotplug_stress 00:30:44.037 ************************************ 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:44.037 * Looking for test storage... 
00:30:44.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.037 12:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.037 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.038 12:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:44.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.038 --rc genhtml_branch_coverage=1 00:30:44.038 --rc genhtml_function_coverage=1 00:30:44.038 --rc genhtml_legend=1 00:30:44.038 --rc geninfo_all_blocks=1 00:30:44.038 --rc geninfo_unexecuted_blocks=1 00:30:44.038 00:30:44.038 ' 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:44.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.038 --rc genhtml_branch_coverage=1 00:30:44.038 --rc genhtml_function_coverage=1 00:30:44.038 --rc genhtml_legend=1 00:30:44.038 --rc geninfo_all_blocks=1 00:30:44.038 --rc geninfo_unexecuted_blocks=1 00:30:44.038 00:30:44.038 ' 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:44.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.038 --rc genhtml_branch_coverage=1 00:30:44.038 --rc genhtml_function_coverage=1 00:30:44.038 --rc genhtml_legend=1 00:30:44.038 --rc geninfo_all_blocks=1 00:30:44.038 --rc geninfo_unexecuted_blocks=1 00:30:44.038 00:30:44.038 ' 00:30:44.038 12:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:44.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.038 --rc genhtml_branch_coverage=1 00:30:44.038 --rc genhtml_function_coverage=1 00:30:44.038 --rc genhtml_legend=1 00:30:44.038 --rc geninfo_all_blocks=1 00:30:44.038 --rc geninfo_unexecuted_blocks=1 00:30:44.038 00:30:44.038 ' 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.038 12:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.038 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.298 
12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:44.298 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.299 12:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.878 
12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.878 12:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:30:50.878 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.878 12:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:30:50.878 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.878 
12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:30:50.878 Found net devices under 0000:1a:00.0: cvl_0_0 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:30:50.878 Found net devices under 0000:1a:00.1: cvl_0_1 00:30:50.878 
12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.878 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:30:50.879 00:30:50.879 --- 10.0.0.2 ping statistics --- 00:30:50.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.879 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:30:50.879 00:30:50.879 --- 10.0.0.1 ping statistics --- 00:30:50.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.879 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.879 12:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1115889 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1115889 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1115889 ']' 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.879 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.879 [2024-11-20 12:44:55.965958] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.879 [2024-11-20 12:44:55.966832] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:30:50.879 [2024-11-20 12:44:55.966865] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.879 [2024-11-20 12:44:56.044543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.879 [2024-11-20 12:44:56.084256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.879 [2024-11-20 12:44:56.084288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.879 [2024-11-20 12:44:56.084295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.879 [2024-11-20 12:44:56.084301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.879 [2024-11-20 12:44:56.084306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:50.879 [2024-11-20 12:44:56.085623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.879 [2024-11-20 12:44:56.085755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.879 [2024-11-20 12:44:56.085757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.879 [2024-11-20 12:44:56.151589] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:50.879 [2024-11-20 12:44:56.152438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:50.879 [2024-11-20 12:44:56.152587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:50.879 [2024-11-20 12:44:56.152761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:51.150 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:51.445 [2024-11-20 12:44:56.990469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.445 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:51.704 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.704 [2024-11-20 12:44:57.334954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.704 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:51.963 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:52.222 Malloc0 00:30:52.222 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:52.222 Delay0 00:30:52.222 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.481 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:52.739 NULL1 00:30:52.739 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:52.739 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1116394 00:30:52.739 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:52.739 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:52.739 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.997 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.257 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:53.257 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:53.257 true 00:30:53.515 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:53.515 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.515 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.774 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:53.774 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:54.032 true 00:30:54.032 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:54.032 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.968 Read completed with error (sct=0, sc=11) 00:30:55.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.227 12:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.227 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:55.227 12:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:55.227 12:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:55.485 true 00:30:55.485 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:55.485 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.744 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.003 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:56.003 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:56.003 true 00:30:56.003 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:56.003 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.379 12:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.379 12:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:57.379 12:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:57.638 true 00:30:57.638 12:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:57.638 12:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.575 12:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.575 12:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:58.575 12:45:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:58.834 true 00:30:58.834 12:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:58.834 12:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.093 12:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.351 12:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:59.351 12:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:59.351 true 00:30:59.351 12:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:30:59.351 12:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.727 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.727 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:31:00.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.727 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:00.727 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:00.986 true 00:31:00.986 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:00.986 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:01.923 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.923 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:01.923 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:02.182 true 00:31:02.182 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:02.182 12:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.441 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.699 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:02.699 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:02.699 true 00:31:02.699 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:02.699 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:02.959 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:02.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:02.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.251 
[2024-11-20 12:45:08.788281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.795940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 
12:45:08.795981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.796997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 
[2024-11-20 12:45:08.797245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.797996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798216] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.798991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.799034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.799076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.253 [2024-11-20 12:45:08.799128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799384] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.799961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.800513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 
12:45:08.801323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.801968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 
[2024-11-20 12:45:08.802546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.802976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803157] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.803847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 [2024-11-20 12:45:08.804017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.254 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.254 [2024-11-20 12:45:08.804064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:31:03.255 [2024-11-20 12:45:08.804436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated continuously from 12:45:08.804475 through 12:45:08.819348, omitted]
block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.819960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 
12:45:08.820053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.820961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 
[2024-11-20 12:45:08.821307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821864] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.821984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.822145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.258 [2024-11-20 12:45:08.822187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.822228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.822266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.822305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.822342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.822380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823856] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.823988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.824961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 
12:45:08.825040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.825991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 
[2024-11-20 12:45:08.826518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.826973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.827018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.827064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.827110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.827162] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.827206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.827248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.259 [2024-11-20 12:45:08.827289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:03.260 [2024-11-20 12:45:08.827493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 
512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:03.260 [2024-11-20 12:45:08.827768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.827975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.828017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.828060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.828100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.828143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.828175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.828218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:31:03.260 [2024-11-20 12:45:08.828254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated for each subsequent read command, timestamps 12:45:08.828299 through 12:45:08.843443 (console times 00:31:03.260-00:31:03.263); duplicates elided ...]
00:31:03.263 [2024-11-20 12:45:08.843490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.843998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 
12:45:08.844129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.844963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 
[2024-11-20 12:45:08.845903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.845982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.846021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.846059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.846112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.846150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.846190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.263 [2024-11-20 12:45:08.846234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846473] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.846961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847749] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.847966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.848981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 
12:45:08.849101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.849987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 
[2024-11-20 12:45:08.850301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.264 [2024-11-20 12:45:08.850583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.850627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.850683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.265 [2024-11-20 12:45:08.851420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:31:03.265 [2024-11-20 12:45:08.851656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.851969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852206] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.265 [2024-11-20 12:45:08.852813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 last message repeated through [2024-11-20 12:45:08.867464]
> SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.867982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868144] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.268 [2024-11-20 12:45:08.868670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.868710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.868749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.868792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.868833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.868880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.868918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.868963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 
12:45:08.869334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.869967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.870986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 
[2024-11-20 12:45:08.871124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871697] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.871976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.872020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.872050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.269 [2024-11-20 12:45:08.872088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.872533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873319] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.873976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 
12:45:08.874621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.874980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.875684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 
[2024-11-20 12:45:08.876472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.270 [2024-11-20 12:45:08.876882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.271 [2024-11-20 12:45:08.876924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.271 [2024-11-20 12:45:08.876976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.271 [2024-11-20 12:45:08.877020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.271 [2024-11-20 12:45:08.877070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.271 [2024-11-20 12:45:08.877117] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.891828] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.891886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.891932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.891975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.892018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.892065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.892113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.892161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.892206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.892247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.274 [2024-11-20 12:45:08.892292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.892990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893159] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.893969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 
12:45:08.894313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.894510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.275 [2024-11-20 12:45:08.895540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.895966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 
[2024-11-20 12:45:08.896104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896733] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.896968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.897722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898331] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.898983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 
12:45:08.899653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.276 [2024-11-20 12:45:08.899737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.899776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.899822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.899862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.899899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.899938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.899969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 [2024-11-20 12:45:08.900792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 
[2024-11-20 12:45:08.900829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.277 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.277 [... identical nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated continuously from 12:45:08.900868 through 12:45:08.916048; duplicate log lines elided ...] 00:31:03.281 [2024-11-20 12:45:08.916085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 
[2024-11-20 12:45:08.916684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.916802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917740] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.917968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.918990] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.919030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.281 [2024-11-20 12:45:08.919069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.919938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.920690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.920739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.920787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.920834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 
12:45:08.920880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.920934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.920980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.921996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 
[2024-11-20 12:45:08.922157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922708] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.922991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.923025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.923067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.923106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.923147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.923193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.282 [2024-11-20 12:45:08.923237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.923969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924104] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.924973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 12:45:08.925308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.283 [2024-11-20 
12:45:08.925347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20
12:45:08.940450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.940955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 
[2024-11-20 12:45:08.941908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.941991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942485] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.287 [2024-11-20 12:45:08.942722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.942765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.942803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.942845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.942882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.943992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944445] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.944969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 
12:45:08.945607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.945983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.946993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 
[2024-11-20 12:45:08.947042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.947086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.947133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.947175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.288 [2024-11-20 12:45:08.947217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947621] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.947977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948883] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.948974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.949986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.289 [2024-11-20 12:45:08.950033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.289 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.293 [2024-11-20
12:45:08.964846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.964886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.964925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.964964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.964995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.965033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.965074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.965108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.965149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.965916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.965970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 
[2024-11-20 12:45:08.966764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.966999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967369] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.293 [2024-11-20 12:45:08.967643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.967692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.967738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.967785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.967834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.967885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.967933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.967980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968731] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.968952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.969960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 
12:45:08.970128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.970978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 
[2024-11-20 12:45:08.971296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.971495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.294 [2024-11-20 12:45:08.972674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.972723] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.972774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.972830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.972881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.972928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.972975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.973983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974013] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.295 [2024-11-20 12:45:08.974622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.989828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.989863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.989902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.989938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.989978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 
12:45:08.990427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.578 [2024-11-20 12:45:08.990969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 
[2024-11-20 12:45:08.991780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.991983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992401] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.992989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993617] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.993755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.994979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 
12:45:08.995652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.995960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 
[2024-11-20 12:45:08.996818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.996976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997498] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.579 [2024-11-20 12:45:08.997766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.997813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.997858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.997904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.997955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.998964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.999003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.999043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.999081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.999120] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 [2024-11-20 12:45:08.999161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.580 true 00:31:03.580 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.582 [2024-11-20 12:45:09.014056] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.582 [2024-11-20 12:45:09.014987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015350] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.015978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 
12:45:09.016518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.016666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.017959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 
[2024-11-20 12:45:09.018649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.018965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019199] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.019962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020679] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.020969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.021011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.021054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.021097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.021142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.021191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.021232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.583 [2024-11-20 12:45:09.021275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 
12:45:09.021914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.021956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.022989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.023026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 
00:31:03.584 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394
00:31:03.584 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[2024-11-20 12:45:09.026092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.584 [2024-11-20 12:45:09.026136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026886] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.026980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.027966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028089] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.028974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.029022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.029065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.029108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.029150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.029195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.029236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 
12:45:09.029287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.029332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.030975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 
[2024-11-20 12:45:09.031163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031799] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.585 [2024-11-20 12:45:09.031979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.032990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033246] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.033999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 
12:45:09.034416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.034971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 [2024-11-20 12:45:09.035017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.586 
[... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated from 12:45:09.035065 through 12:45:09.050321, duplicates elided ...]
[2024-11-20 12:45:09.050361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.050934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 
[2024-11-20 12:45:09.050973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.598 [2024-11-20 12:45:09.051277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051553] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.599 [2024-11-20 12:45:09.051770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.051998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 
12:45:09.052316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.052972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 
[2024-11-20 12:45:09.053534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.053968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054098] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.599 [2024-11-20 12:45:09.054936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.054986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055813] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.055978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.056940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 
12:45:09.056979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.057448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 
[2024-11-20 12:45:09.058904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.058982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059465] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 [2024-11-20 12:45:09.059511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.600 (last message repeated for each subsequent read command; identical entries with timestamps 12:45:09.059552 through 12:45:09.073491 omitted)
[2024-11-20 12:45:09.073538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.073993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074175] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.074973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.603 [2024-11-20 12:45:09.075006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075365] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.075998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.076967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 
12:45:09.077010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.077985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 
[2024-11-20 12:45:09.078319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078890] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.078972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.604 [2024-11-20 12:45:09.079958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.604 [2024-11-20 12:45:09.080602] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.080976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 
12:45:09.081761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.081991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.082366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 
[2024-11-20 12:45:09.083721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.083955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084345] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.605 [2024-11-20 12:45:09.084385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated through 2024-11-20 12:45:09.098185; duplicates elided] 00:31:03.608
[2024-11-20 12:45:09.098227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098867] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.098955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 Message suppressed 999 times: [2024-11-20 12:45:09.099497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 Read completed with error (sct=0, sc=15) 00:31:03.608 [2024-11-20 12:45:09.099542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 
12:45:09.099630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.099978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 
[2024-11-20 12:45:09.100778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.100958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.101007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.101052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.101097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.101143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.101883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.101932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.101976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102106] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.102982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103308] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.103994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.104034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.104072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.104114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.104152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.104189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.608 [2024-11-20 12:45:09.104226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 
12:45:09.104456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.104992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 
[2024-11-20 12:45:09.105770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.105988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106310] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.609 [2024-11-20 12:45:09.106947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122603] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.122894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.123967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 
12:45:09.124510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.124966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 
[2024-11-20 12:45:09.125849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.125992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126574] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.126981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127807] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.127996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.128972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.129015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.129052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.129093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 
12:45:09.129132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.129170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.129213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.129251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.129295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.611 [2024-11-20 12:45:09.130629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.130982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 
[2024-11-20 12:45:09.131073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131725] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.612 [2024-11-20 12:45:09.131772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for timestamps 12:45:09.131816 through 12:45:09.147068 ...]
00:31:03.613 [2024-11-20 12:45:09.147110] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.147986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148360] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.148980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.149020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.149060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.149102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.149153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.613 [2024-11-20 12:45:09.149200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.614 [2024-11-20 12:45:09.149296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.149970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 
[2024-11-20 12:45:09.150335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150879] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.150969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.151675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152932] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.152980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.153979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 
12:45:09.154138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.154991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 
[2024-11-20 12:45:09.155527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.155949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.156319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.156369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.156408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.156457] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.614 [2024-11-20 12:45:09.156501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated with successive timestamps from 12:45:09.156541 through 12:45:09.171308; repeats omitted]
ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.171740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.172997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173284] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.173978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 
12:45:09.174517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.174979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.175927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 
[2024-11-20 12:45:09.175979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.176020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.176068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.176116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.176164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.616 [2024-11-20 12:45:09.176618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.176959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177004] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.177977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178166] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.178980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 
12:45:09.179480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.179971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 
[2024-11-20 12:45:09.180745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.180972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181404] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.617 [2024-11-20 12:45:09.181451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.618 [2024-11-20 12:45:09.188474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.618 12:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.901 [2024-11-20 12:45:09.394942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:31:03.901 [2024-11-20 12:45:09.395358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 
12:45:09.401827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.401978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.402858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 
[2024-11-20 12:45:09.402893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.403988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404209] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.903 [2024-11-20 12:45:09.404894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.404938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.404974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405339] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.405994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 
12:45:09.406597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.904 [2024-11-20 12:45:09.406821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.406996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:31:03.904 [2024-11-20 12:45:09.407228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407860] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.407980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.408887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.409392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.904 [2024-11-20 12:45:09.409441] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.409954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.410007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.410051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.905 [2024-11-20 12:45:09.410093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical error repeated for timestamps 12:45:09.410137 through 12:45:09.424631] 00:31:03.908 [2024-11-20 12:45:09.424669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.424706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.424745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.424783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.424817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.424851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.424887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.424924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 
12:45:09.425416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.425467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.426980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 
[2024-11-20 12:45:09.427267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427841] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.908 [2024-11-20 12:45:09.427883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.427922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.427961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.428959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429229] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.429988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 
12:45:09.430472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.430980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.431964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 
[2024-11-20 12:45:09.432143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.909 [2024-11-20 12:45:09.432706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.432745] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.432784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.432822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.432852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.432894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.432932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.432969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.910 [2024-11-20 12:45:09.433895] 
00:31:03.910 [2024-11-20 12:45:09.433942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:31:03.910 [... identical nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated from 12:45:09.433 through 12:45:09.435 ...]
00:31:03.910 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:31:03.910 [... identical nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated through 12:45:09.436 ...]
00:31:03.910 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:31:03.913 [... identical nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated continuously from 12:45:09.436 through 12:45:09.449 ...]
block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 
12:45:09.449642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.449997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.450946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 
[2024-11-20 12:45:09.451314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.913 [2024-11-20 12:45:09.451458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451945] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.451987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.452994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453172] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.453963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.454991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 
12:45:09.455190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.455988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 
[2024-11-20 12:45:09.456357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.914 [2024-11-20 12:45:09.456395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456898] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.456975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.915 [2024-11-20 12:45:09.457362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 
12:45:09.457642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.457975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 [2024-11-20 12:45:09.458344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.915 (previous message repeated for timestamps 12:45:09.458380 through 12:45:09.473636; identical entries omitted) [2024-11-20 12:45:09.473684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.473727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.473775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.473820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.473868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.473912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.473957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.473999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 
[2024-11-20 12:45:09.474347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474900] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.474979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.475023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.475061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.475099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.475147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.475188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.918 [2024-11-20 12:45:09.475231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.475965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476227] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.476963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 
12:45:09.477485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.477986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.478973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 
[2024-11-20 12:45:09.479163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.919 [2024-11-20 12:45:09.479536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479802] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.479987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.480993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481032] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.481969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 12:45:09.482955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.920 [2024-11-20 
12:45:09.482999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [last message repeated for each subsequent request from 12:45:09.483046 through 12:45:09.497645; identical occurrences omitted]
12:45:09.497695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.497735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.497778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.497823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.497867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.497915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.497954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.497991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.923 [2024-11-20 12:45:09.498842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.498884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.498923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.498962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 
[2024-11-20 12:45:09.498998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499624] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.499717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.500959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501259] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.501964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 
12:45:09.502470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.502833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.503002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.503051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.924 [2024-11-20 12:45:09.503093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 
[2024-11-20 12:45:09.503903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.503981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504463] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.504535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.925 [2024-11-20 12:45:09.505218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 
12:45:09.505568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.505990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 
[2024-11-20 12:45:09.506795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.506992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507392] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.925 [2024-11-20 12:45:09.507442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[message repeated; identical *ERROR* lines logged from 2024-11-20 12:45:09.507486 through 12:45:09.521683 (00:31:03.925-00:31:03.928)]
[2024-11-20 12:45:09.521728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.928 [2024-11-20 12:45:09.521767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.928 [2024-11-20 12:45:09.521940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.521971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522459] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.522964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.523551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524276] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.524986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 
12:45:09.525418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.525971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.929 [2024-11-20 12:45:09.526599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 
[2024-11-20 12:45:09.526639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.526679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.526842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.526887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.526925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.526962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.526998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527311] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.527910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.528952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529190] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.529988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 
12:45:09.530551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.530972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.531009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.531047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.531088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.531125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.531167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.930 [2024-11-20 12:45:09.546194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 
[2024-11-20 12:45:09.546898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.546991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.547964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548123] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.548981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549357] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.549961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 12:45:09.550749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.934 [2024-11-20 
12:45:09.550788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.550818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.550856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.550895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.550940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.550981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 
[2024-11-20 12:45:09.551933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.551977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.552989] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.553965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554259] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.554969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.935 [2024-11-20 12:45:09.555332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.936 [2024-11-20 12:45:09.555511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.936 [2024-11-20 12:45:09.555554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.936 [2024-11-20 
12:45:09.555597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.936 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:31:03.939 [2024-11-20 12:45:09.570392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.570991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 
[2024-11-20 12:45:09.571590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.571964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572160] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.572970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573380] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.573976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.574020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.574073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.574119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.939 [2024-11-20 12:45:09.574166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 
12:45:09.574808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.574976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.575461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 
[2024-11-20 12:45:09.576609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.576978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577262] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:31:03.940 [2024-11-20 12:45:09.577959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578533] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.578840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.579012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.579058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.579097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.579139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.579179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.940 [2024-11-20 12:45:09.579219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 12:45:09.579841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 [2024-11-20 
12:45:09.579882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:31:03.941 
true 00:31:03.943 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:03.943 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.137 12:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:05.137 12:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:05.137 12:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:05.395 true 00:31:05.395 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:05.395 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.653 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.653 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:05.653 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:05.912 true 00:31:05.912 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:05.912 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.288 12:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.288 12:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1015 00:31:07.288 12:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:07.547 true 00:31:07.547 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:07.547 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.484 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.484 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:08.484 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:08.744 true 00:31:08.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:08.744 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.015 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.015 12:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:09.015 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:09.273 true 00:31:09.273 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:09.273 12:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.656 12:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.656 12:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:10.656 12:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:10.913 true 00:31:10.913 12:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1116394 00:31:10.913 12:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.848 12:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.848 12:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:11.848 12:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:12.106 true 00:31:12.106 12:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:12.106 12:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.365 12:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.365 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:12.365 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:12.623 true 00:31:12.623 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:12.623 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.001 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.001 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:14.001 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:14.001 true 00:31:14.001 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:14.001 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.938 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.196 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:15.196 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:15.196 true 00:31:15.196 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:15.196 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.455 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.713 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:15.713 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:15.972 true 00:31:15.972 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:15.972 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.348 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:31:17.348 12:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.348 12:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:17.348 12:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:17.348 true 00:31:17.348 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:17.348 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.283 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.542 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 
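The trace above repeats one cycle of target/ns_hotplug_stress.sh lines 44-50: while the stress process (PID 1116394) is alive (`kill -0`), namespace 1 is removed, Delay0 is re-added, and NULL1 is grown by one, with null_size stepping through 1014-1030 in this section. A minimal Python sketch of that control flow, with the rpc.py invocations stubbed out as strings (real runs drive /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py against a live target; the loop bound here stands in for the liveness check):

```python
# Sketch of the ns_hotplug_stress resize loop seen in the log.
# Assumption: the cycle mirrors ns_hotplug_stress.sh@44-50; rpc calls
# are collected as strings instead of being issued to a running target.

def hotplug_cycles(start_size, end_size):
    """Yield the rpc.py sub-commands one stress run would issue."""
    nqn = "nqn.2016-06.io.spdk:cnode1"
    size = start_size
    while size < end_size:  # stands in for `kill -0 $stress_pid`
        size += 1           # sh@49: null_size grows by 1 each cycle
        yield f"nvmf_subsystem_remove_ns {nqn} 1"   # sh@45
        yield f"nvmf_subsystem_add_ns {nqn} Delay0" # sh@46
        yield f"bdev_null_resize NULL1 {size}"      # sh@50

calls = list(hotplug_cycles(1013, 1030))  # sizes 1014..1030, as logged
```

Each cycle contributes three rpc calls, so the 17 logged sizes correspond to 51 stubbed invocations here.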
00:31:18.542 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:18.542 true 00:31:18.542 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:18.800 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.800 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.058 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:19.058 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:19.317 true 00:31:19.317 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:19.317 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.695 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.695 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:31:20.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.695 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:20.695 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:20.695 true 00:31:20.954 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 00:31:20.954 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.779 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.779 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:21.779 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:22.037 true 00:31:22.037 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394 
00:31:22.037 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:22.295 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:22.554 12:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:31:22.554 12:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:31:22.554 true
00:31:22.554 12:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394
00:31:22.554 12:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:23.930 Initializing NVMe Controllers
00:31:23.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:23.930 Controller IO queue size 128, less than required.
00:31:23.930 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:23.930 Controller IO queue size 128, less than required.
00:31:23.930 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:23.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:23.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:23.930 Initialization complete. Launching workers.
00:31:23.930 ========================================================
00:31:23.930 Latency(us)
00:31:23.930 Device Information : IOPS | MiB/s | Average | min | max
00:31:23.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2734.67 | 1.34 | 31845.83 | 1004.79 | 1031885.79
00:31:23.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18805.90 | 9.18 | 6790.98 | 1471.13 | 273764.23
00:31:23.930 ========================================================
00:31:23.930 Total : 21540.57 | 10.52 | 9971.80 | 1004.79 | 1031885.79
00:31:23.930
00:31:23.930 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:23.930 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:31:23.930 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:31:24.189 true
00:31:24.189 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1116394
00:31:24.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1116394) - No such process
00:31:24.189 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1116394
00:31:24.189 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:24.448 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.448 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:24.448 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:24.448 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:24.448 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:24.449 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:24.707 null0 00:31:24.707 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:24.707 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:24.707 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:24.966 null1 00:31:24.966 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:24.966 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:24.966 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:24.966 null2 00:31:24.966 12:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:24.966 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:24.966 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:25.225 null3 00:31:25.225 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.225 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.225 12:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:25.484 null4 00:31:25.484 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.484 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.484 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:25.484 null5 00:31:25.484 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.484 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.484 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:25.743 null6 00:31:25.743 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.743 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.743 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:26.004 null7 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
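The setup above creates eight null bdevs (null0-null7, size argument 100, block size 4096, as logged) which the eight add_remove workers then attach to cnode1 as namespaces 1-8. A hedged sketch of the equivalent rpc.py sub-command list, mirrored from the log (strings only; actually issuing them requires a running SPDK target):

```python
# Mirror of the log's setup phase: eight bdev_null_create calls, then
# the nsid->bdev pairing the add_remove workers use (nsid i+1 <-> null{i}).
NQN = "nqn.2016-06.io.spdk:cnode1"

create = [f"bdev_null_create null{i} 100 4096" for i in range(8)]
attach = [f"nvmf_subsystem_add_ns -n {i + 1} {NQN} null{i}" for i in range(8)]
```

The one-to-one nsid/bdev pairing is what lets the eight workers hotplug their own namespace concurrently without colliding.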
00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
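The latency summary printed earlier in this section (NSID 1: 2734.67 IOPS, NSID 2: 18805.90 IOPS, Total: 21540.57) is internally consistent: the Total row sums IOPS and MiB/s across namespaces and reports the IOPS-weighted mean latency. A quick arithmetic check against the logged figures:

```python
# Cross-check of the "Latency(us)" summary table from the log.
rows = [  # (IOPS, MiB/s, average latency in us) for NSID 1 and NSID 2
    (2734.67, 1.34, 31845.83),
    (18805.90, 9.18, 6790.98),
]
total_iops = sum(r[0] for r in rows)            # ≈ 21540.57, as logged
total_mibs = sum(r[1] for r in rows)            # ≈ 10.52, as logged
avg_us = sum(r[0] * r[2] for r in rows) / total_iops  # ≈ 9971.80, as logged
```

The slow NSID 1 path (≈31.8 ms average) drags the combined mean well above NSID 2's ≈6.8 ms despite contributing far fewer IOPS.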
00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
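The interleaved @62-@64 entries above launch each add_remove worker in the background and record its PID (pids+=($!)), which the script later waits on (@66); each worker runs 10 add/remove iterations (@16: i < 10). A minimal Python analogue of that fan-out/join, with the rpc calls stubbed to strings (structure inferred from the logged script lines, not taken from ns_hotplug_stress.sh itself):

```python
# Sketch of the 8-worker hotplug phase: background workers + join,
# analogous to the script's `add_remove ... &`, `pids+=($!)`, `wait`.
import threading

issued = []
lock = threading.Lock()

def add_remove(nsid, bdev, iterations=10):
    # Each worker repeatedly attaches and detaches its own namespace.
    for _ in range(iterations):
        with lock:
            issued.append(f"nvmf_subsystem_add_ns -n {nsid} cnode1 {bdev}")
            issued.append(f"nvmf_subsystem_remove_ns cnode1 {nsid}")

workers = [threading.Thread(target=add_remove, args=(i + 1, f"null{i}"))
           for i in range(8)]
for t in workers:
    t.start()      # background launch, like `add_remove ... &`
for t in workers:
    t.join()       # like the script's `wait` on the collected pids
```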
00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1122811 1122812 1122814 1122816 1122818 1122820 1122822 1122823 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.004 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.005 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.005 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.005 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.005 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.264 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.264 12:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.264 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.264 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.264 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.264 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.264 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.264 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.265 12:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.265 12:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.524 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.783 12:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.783 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.041 12:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.041 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.042 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.301 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.561 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.820 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.821 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.080 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.339 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.340 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.340 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.340 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.340 12:45:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.340 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.340 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.340 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.340 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
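The trace entries above repeat one pattern from ns_hotplug_stress.sh (the sh@16-18 markers): a counted loop that adds namespaces null0..null7 to subsystem nqn.2016-06.io.spdk:cnode1 via rpc.py, then removes them again. A minimal sketch of that cycle, reconstructed only from the log, is below; the `RPC` override and the `hotplug_cycle` function name are illustrative additions (the real script appears to background these calls, given the shuffled ordering in the trace, but they are shown sequentially here for clarity):

```shell
#!/usr/bin/env sh
# Sketch of one add/remove cycle as traced by ns_hotplug_stress.sh@16-18.
# NQN, namespace IDs, and bdev names are taken from the log; the RPC
# variable is an assumption so the sketch can run without a live target.
RPC="${RPC:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py}"
NQN=nqn.2016-06.io.spdk:cnode1

hotplug_cycle() {
    # Attach null bdevs null0..null7 as namespaces 1..8.
    n=1
    while [ "$n" -le 8 ]; do
        $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        n=$((n + 1))
    done
    # Detach the same namespaces again.
    n=1
    while [ "$n" -le 8 ]; do
        $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
        n=$((n + 1))
    done
}
```

The stress test then repeats this cycle ten times (the `(( ++i ))` / `(( i < 10 ))` pair in the trace), relying on the target surviving rapid namespace hot-plug churn.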
00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.599 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.858 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.858 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.858 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.858 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.859 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.156 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.520 12:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.520 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.520 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.521 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.780 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:29.780 rmmod nvme_tcp 00:31:29.780 rmmod nvme_fabrics 00:31:29.780 rmmod nvme_keyring 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1115889 ']' 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1115889 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1115889 ']' 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 1115889 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.780 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115889 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115889' 00:31:30.039 killing process with pid 1115889 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1115889 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1115889 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.039 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.573 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:32.574 00:31:32.574 real 0m48.182s 00:31:32.574 user 2m56.887s 00:31:32.574 sys 0m19.235s 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:32.574 ************************************ 00:31:32.574 END TEST nvmf_ns_hotplug_stress 00:31:32.574 ************************************ 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:32.574 12:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:32.574 ************************************ 00:31:32.574 START TEST nvmf_delete_subsystem 00:31:32.574 ************************************ 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:32.574 * Looking for test storage... 00:31:32.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:32.574 12:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.574 12:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.574 --rc genhtml_branch_coverage=1 00:31:32.574 --rc genhtml_function_coverage=1 00:31:32.574 --rc genhtml_legend=1 00:31:32.574 --rc geninfo_all_blocks=1 00:31:32.574 --rc geninfo_unexecuted_blocks=1 00:31:32.574 00:31:32.574 ' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.574 --rc genhtml_branch_coverage=1 00:31:32.574 --rc genhtml_function_coverage=1 00:31:32.574 --rc genhtml_legend=1 00:31:32.574 --rc geninfo_all_blocks=1 00:31:32.574 --rc geninfo_unexecuted_blocks=1 00:31:32.574 00:31:32.574 ' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.574 --rc genhtml_branch_coverage=1 00:31:32.574 --rc genhtml_function_coverage=1 00:31:32.574 --rc genhtml_legend=1 00:31:32.574 --rc geninfo_all_blocks=1 00:31:32.574 --rc geninfo_unexecuted_blocks=1 00:31:32.574 00:31:32.574 ' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.574 --rc genhtml_branch_coverage=1 00:31:32.574 --rc genhtml_function_coverage=1 00:31:32.574 --rc genhtml_legend=1 00:31:32.574 --rc geninfo_all_blocks=1 00:31:32.574 --rc geninfo_unexecuted_blocks=1 00:31:32.574 00:31:32.574 ' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.574 12:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.574 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.575 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.575 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:32.575 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:32.575 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.575 12:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.144 12:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.144 12:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:31:39.144 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:31:39.144 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.144 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.145 12:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:31:39.145 Found net devices under 0000:1a:00.0: cvl_0_0 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:31:39.145 Found net devices under 0000:1a:00.1: cvl_0_1 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:39.145 12:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.145 12:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.145 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:39.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:31:39.145 00:31:39.145 --- 10.0.0.2 ping statistics --- 00:31:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.145 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:39.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:31:39.145 00:31:39.145 --- 10.0.0.1 ping statistics --- 00:31:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.145 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:39.145 
12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1127274 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1127274 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1127274 ']' 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.145 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.145 [2024-11-20 12:45:44.108758] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:39.145 [2024-11-20 12:45:44.109596] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:31:39.145 [2024-11-20 12:45:44.109627] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.145 [2024-11-20 12:45:44.187734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:39.145 [2024-11-20 12:45:44.225652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.145 [2024-11-20 12:45:44.225686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.145 [2024-11-20 12:45:44.225693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.145 [2024-11-20 12:45:44.225699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.145 [2024-11-20 12:45:44.225704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.145 [2024-11-20 12:45:44.226913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.145 [2024-11-20 12:45:44.226915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.145 [2024-11-20 12:45:44.292149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:39.145 [2024-11-20 12:45:44.292674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:39.146 [2024-11-20 12:45:44.292893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.405 [2024-11-20 12:45:44.967797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.405 [2024-11-20 12:45:44.996051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.405 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.405 NULL1 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.405 Delay0 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1127550 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:39.405 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:39.405 [2024-11-20 12:45:45.112017] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
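At this point the script has launched spdk_nvme_perf in the background, saved its PID in perf_pid, and sleeps before deleting the subsystem out from under it; later it polls the PID with `kill -0` until the process dies. A hypothetical, self-contained sketch of that background-process liveness-polling pattern (using a short `sleep` as a stand-in for the long-running perf workload):

```shell
# Stand-in for the perf_pid / kill -0 pattern in delete_subsystem.sh:
# spawn a background job, then poll until it exits or a retry cap hits.
sleep 0.2 &            # stand-in for spdk_nvme_perf (assumption)
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
  # kill -0 sends no signal; it only tests that the PID still exists
  if (( delay++ > 20 )); then
    echo "gave up waiting"
    break
  fi
  sleep 0.1
done
echo "polls=$delay"
```

The real script runs the same `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` loop, and treats the eventual "No such process" from `kill` as the signal that perf has exited.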
00:31:41.309 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:41.309 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.309 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Write completed with error (sct=0, 
sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 [2024-11-20 12:45:47.307755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c024a0 is same with the state(6) to be set 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read 
completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error 
(sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 Read completed with error (sct=0, sc=8) 00:31:41.569 starting I/O failed: -6 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.569 Write completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 starting I/O failed: -6 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with 
error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 starting I/O failed: -6 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 starting I/O failed: -6 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 starting I/O failed: -6 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 [2024-11-20 12:45:47.311475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd53400d350 is same with the state(6) to be set 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error 
(sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:41.570 Write completed with error (sct=0, sc=8) 00:31:41.570 Read completed with error (sct=0, sc=8) 00:31:42.948 [2024-11-20 12:45:48.289543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c039a0 is same with the state(6) to be set 00:31:42.948 Write completed with error (sct=0, sc=8) 
00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 [2024-11-20 12:45:48.311013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c02680 is same with the state(6) to be set 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 
Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 [2024-11-20 12:45:48.311321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c02860 is same with the state(6) to be set 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 [2024-11-20 12:45:48.313841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd534000c40 is same with the state(6) to be set 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read 
completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Read completed with error (sct=0, sc=8) 00:31:42.949 Write completed with error (sct=0, sc=8) 00:31:42.949 [2024-11-20 12:45:48.314030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd53400d680 is same with the state(6) to be set 00:31:42.949 Initializing NVMe Controllers 00:31:42.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.949 Controller IO queue size 128, less than required. 00:31:42.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:42.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:42.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:42.949 Initialization complete. Launching workers. 
00:31:42.949 ======================================================== 00:31:42.949 Latency(us) 00:31:42.949 Device Information : IOPS MiB/s Average min max 00:31:42.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.39 0.08 910997.19 246.39 1005568.34 00:31:42.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.37 0.08 905063.20 244.17 1009838.95 00:31:42.949 ======================================================== 00:31:42.949 Total : 328.76 0.16 907994.23 244.17 1009838.95 00:31:42.949 00:31:42.949 [2024-11-20 12:45:48.314762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c039a0 (9): Bad file descriptor 00:31:42.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:42.949 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.949 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:42.949 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1127550 00:31:42.949 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1127550 00:31:43.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1127550) - No such process 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1127550 00:31:43.209 12:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1127550 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1127550 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.209 [2024-11-20 12:45:48.843930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1128090 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:43.209 12:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:43.209 [2024-11-20 12:45:48.926529] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:43.776 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:43.776 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:43.776 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:44.344 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:44.344 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:44.344 12:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:44.910 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:44.910 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:44.910 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:45.169 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:31:45.169 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:45.169 12:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:45.736 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:45.736 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:45.736 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:46.304 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:46.304 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:46.304 12:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:46.304 Initializing NVMe Controllers 00:31:46.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:46.304 Controller IO queue size 128, less than required. 00:31:46.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:46.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:46.304 Initialization complete. Launching workers. 
00:31:46.304 ========================================================
00:31:46.304 Latency(us)
00:31:46.304 Device Information : IOPS MiB/s Average min max
00:31:46.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002433.07 1000123.64 1041220.58
00:31:46.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003976.94 1000302.20 1009986.76
00:31:46.304 ========================================================
00:31:46.304 Total : 256.00 0.12 1003205.00 1000123.64 1041220.58
00:31:46.304
00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1128090 00:31:46.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1128090) - No such process 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1128090 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.872 rmmod nvme_tcp 00:31:46.872 rmmod nvme_fabrics 00:31:46.872 rmmod nvme_keyring 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1127274 ']' 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1127274 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1127274 ']' 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1127274 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127274 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.872 12:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127274' 00:31:46.872 killing process with pid 1127274 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1127274 00:31:46.872 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1127274 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.131 12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.131 12:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.036 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.036 00:31:49.036 real 0m16.886s 00:31:49.036 user 0m26.324s 00:31:49.036 sys 0m6.248s 00:31:49.036 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.036 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:49.036 ************************************ 00:31:49.036 END TEST nvmf_delete_subsystem 00:31:49.036 ************************************ 00:31:49.036 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:49.036 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.036 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.036 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.296 ************************************ 00:31:49.296 START TEST nvmf_host_management 00:31:49.296 ************************************ 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:49.296 * Looking for test storage... 
00:31:49.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.296 12:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.296 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.297 --rc genhtml_branch_coverage=1 00:31:49.297 --rc genhtml_function_coverage=1 00:31:49.297 --rc genhtml_legend=1 00:31:49.297 --rc geninfo_all_blocks=1 00:31:49.297 --rc geninfo_unexecuted_blocks=1 00:31:49.297 00:31:49.297 ' 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.297 --rc genhtml_branch_coverage=1 00:31:49.297 --rc genhtml_function_coverage=1 00:31:49.297 --rc genhtml_legend=1 00:31:49.297 --rc geninfo_all_blocks=1 00:31:49.297 --rc geninfo_unexecuted_blocks=1 00:31:49.297 00:31:49.297 ' 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.297 --rc genhtml_branch_coverage=1 00:31:49.297 --rc genhtml_function_coverage=1 00:31:49.297 --rc genhtml_legend=1 00:31:49.297 --rc geninfo_all_blocks=1 00:31:49.297 --rc geninfo_unexecuted_blocks=1 00:31:49.297 00:31:49.297 ' 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:49.297 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.297 --rc genhtml_branch_coverage=1 00:31:49.297 --rc genhtml_function_coverage=1 00:31:49.297 --rc genhtml_legend=1 00:31:49.297 --rc geninfo_all_blocks=1 00:31:49.297 --rc geninfo_unexecuted_blocks=1 00:31:49.297 00:31:49.297 ' 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.297 12:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.297 12:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.297 
12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.297 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.298 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.870 
12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.870 12:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:31:55.870 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.870 12:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:31:55.870 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.870 12:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:31:55.870 Found net devices under 0000:1a:00.0: cvl_0_0 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.870 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:31:55.871 Found net devices under 0000:1a:00.1: cvl_0_1 00:31:55.871 12:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.871 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:31:55.871 00:31:55.871 --- 10.0.0.2 ping statistics --- 00:31:55.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.871 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:31:55.871 00:31:55.871 --- 10.0.0.1 ping statistics --- 00:31:55.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.871 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1132389 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1132389 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1132389 ']' 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.871 12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.871 [2024-11-20 12:46:01.199169] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.871 [2024-11-20 12:46:01.200099] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:31:55.871 [2024-11-20 12:46:01.200134] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.871 [2024-11-20 12:46:01.279264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:55.871 [2024-11-20 12:46:01.318693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.871 [2024-11-20 12:46:01.318730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.871 [2024-11-20 12:46:01.318737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.871 [2024-11-20 12:46:01.318742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.871 [2024-11-20 12:46:01.318746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:55.871 [2024-11-20 12:46:01.320464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.871 [2024-11-20 12:46:01.320578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.871 [2024-11-20 12:46:01.320663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.871 [2024-11-20 12:46:01.320664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:55.871 [2024-11-20 12:46:01.385968] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.871 [2024-11-20 12:46:01.386804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:55.871 [2024-11-20 12:46:01.386962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:55.871 [2024-11-20 12:46:01.387382] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.871 [2024-11-20 12:46:01.387420] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 [2024-11-20 12:46:02.061553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 12:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 Malloc0 00:31:56.440 [2024-11-20 12:46:02.149695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.440 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1132691 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1132691 /var/tmp/bdevperf.sock 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1132691 ']' 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:56.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.700 { 00:31:56.700 "params": { 00:31:56.700 "name": "Nvme$subsystem", 00:31:56.700 "trtype": "$TEST_TRANSPORT", 00:31:56.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.700 "adrfam": "ipv4", 00:31:56.700 "trsvcid": "$NVMF_PORT", 00:31:56.700 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.700 "hdgst": ${hdgst:-false}, 00:31:56.700 "ddgst": ${ddgst:-false} 00:31:56.700 }, 00:31:56.700 "method": "bdev_nvme_attach_controller" 00:31:56.700 } 00:31:56.700 EOF 00:31:56.700 )") 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:56.700 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.700 "params": { 00:31:56.700 "name": "Nvme0", 00:31:56.700 "trtype": "tcp", 00:31:56.700 "traddr": "10.0.0.2", 00:31:56.700 "adrfam": "ipv4", 00:31:56.700 "trsvcid": "4420", 00:31:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.700 "hdgst": false, 00:31:56.700 "ddgst": false 00:31:56.700 }, 00:31:56.700 "method": "bdev_nvme_attach_controller" 00:31:56.700 }' 00:31:56.700 [2024-11-20 12:46:02.243614] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:31:56.700 [2024-11-20 12:46:02.243657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132691 ] 00:31:56.700 [2024-11-20 12:46:02.316827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.700 [2024-11-20 12:46:02.354823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.959 Running I/O for 10 seconds... 
00:31:57.529 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.529 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:57.529 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:57.530 12:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.530 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.530 
[2024-11-20 12:46:03.137345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137459] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137527] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4ce0 is same with the state(6) to be set 00:31:57.530 [2024-11-20 12:46:03.137847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.137988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.137996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138021] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138253] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.531 [2024-11-20 12:46:03.138399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:57.531 [2024-11-20 12:46:03.138405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.532 [2024-11-20 12:46:03.138744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.138752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bc50 is same with the state(6) to be set 00:31:57.532 [2024-11-20 12:46:03.139664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:57.532 task offset: 0 on job bdev=Nvme0n1 fails 00:31:57.532 00:31:57.532 Latency(us) 00:31:57.532 [2024-11-20T11:46:03.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:57.532 Job: Nvme0n1 ended in about 0.47 seconds with error 00:31:57.532 Verification LBA range: start 0x0 length 0x400 00:31:57.532 Nvme0n1 : 0.47 2175.10 135.94 135.94 0.00 27062.99 3410.85 24188.74 00:31:57.532 [2024-11-20T11:46:03.296Z] =================================================================================================================== 00:31:57.532 
[2024-11-20T11:46:03.296Z] Total : 2175.10 135.94 135.94 0.00 27062.99 3410.85 24188.74 00:31:57.532 [2024-11-20 12:46:03.141271] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:57.532 [2024-11-20 12:46:03.141287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x882940 (9): Bad file descriptor 00:31:57.532 [2024-11-20 12:46:03.142204] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:57.532 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.532 [2024-11-20 12:46:03.142279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:57.532 [2024-11-20 12:46:03.142298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.532 [2024-11-20 12:46:03.142309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:57.532 [2024-11-20 12:46:03.142317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:57.532 [2024-11-20 12:46:03.142323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.532 [2024-11-20 12:46:03.142329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x882940 00:31:57.532 [2024-11-20 12:46:03.142346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x882940 (9): Bad file descriptor 00:31:57.532 [2024-11-20 12:46:03.142356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:57.532 [2024-11-20 
12:46:03.142362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:57.532 [2024-11-20 12:46:03.142371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:57.532 [2024-11-20 12:46:03.142378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:57.532 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:57.532 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.532 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.532 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.532 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1132691 00:31:58.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1132691) - No such process 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:58.470 { 00:31:58.470 "params": { 00:31:58.470 "name": "Nvme$subsystem", 00:31:58.470 "trtype": "$TEST_TRANSPORT", 00:31:58.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:58.470 "adrfam": "ipv4", 00:31:58.470 "trsvcid": "$NVMF_PORT", 00:31:58.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:58.470 "hdgst": ${hdgst:-false}, 00:31:58.470 "ddgst": ${ddgst:-false} 00:31:58.470 }, 00:31:58.470 "method": "bdev_nvme_attach_controller" 00:31:58.470 } 00:31:58.470 EOF 00:31:58.470 )") 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:58.470 12:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:58.470 "params": { 00:31:58.470 "name": "Nvme0", 00:31:58.470 "trtype": "tcp", 00:31:58.470 "traddr": "10.0.0.2", 00:31:58.470 "adrfam": "ipv4", 00:31:58.470 "trsvcid": "4420", 00:31:58.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.470 "hdgst": false, 00:31:58.470 "ddgst": false 00:31:58.470 }, 00:31:58.470 "method": "bdev_nvme_attach_controller" 00:31:58.470 }' 00:31:58.470 [2024-11-20 12:46:04.209381] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:31:58.470 [2024-11-20 12:46:04.209437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132973 ] 00:31:58.730 [2024-11-20 12:46:04.283083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.730 [2024-11-20 12:46:04.321038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.989 Running I/O for 1 seconds... 
00:31:59.926 2268.00 IOPS, 141.75 MiB/s 00:31:59.926 Latency(us) 00:31:59.926 [2024-11-20T11:46:05.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.926 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:59.926 Verification LBA range: start 0x0 length 0x400 00:31:59.926 Nvme0n1 : 1.01 2311.55 144.47 0.00 0.00 27187.48 1563.93 24069.59 00:31:59.926 [2024-11-20T11:46:05.690Z] =================================================================================================================== 00:31:59.926 [2024-11-20T11:46:05.690Z] Total : 2311.55 144.47 0.00 0.00 27187.48 1563.93 24069.59 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:00.185 
12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:00.185 rmmod nvme_tcp 00:32:00.185 rmmod nvme_fabrics 00:32:00.185 rmmod nvme_keyring 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1132389 ']' 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1132389 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1132389 ']' 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1132389 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1132389 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:00.185 12:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1132389' 00:32:00.185 killing process with pid 1132389 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1132389 00:32:00.185 12:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1132389 00:32:00.445 [2024-11-20 12:46:06.080924] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.445 12:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:02.982 00:32:02.982 real 0m13.374s 00:32:02.982 user 0m19.266s 00:32:02.982 sys 0m6.493s 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:02.982 ************************************ 00:32:02.982 END TEST nvmf_host_management 00:32:02.982 ************************************ 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:02.982 ************************************ 00:32:02.982 START TEST nvmf_lvol 00:32:02.982 ************************************ 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:02.982 * Looking for test storage... 
00:32:02.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:02.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.982 --rc genhtml_branch_coverage=1 00:32:02.982 --rc genhtml_function_coverage=1 00:32:02.982 --rc genhtml_legend=1 00:32:02.982 --rc geninfo_all_blocks=1 00:32:02.982 --rc geninfo_unexecuted_blocks=1 00:32:02.982 00:32:02.982 ' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:02.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.982 --rc genhtml_branch_coverage=1 00:32:02.982 --rc genhtml_function_coverage=1 00:32:02.982 --rc genhtml_legend=1 00:32:02.982 --rc geninfo_all_blocks=1 00:32:02.982 --rc geninfo_unexecuted_blocks=1 00:32:02.982 00:32:02.982 ' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:02.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.982 --rc genhtml_branch_coverage=1 00:32:02.982 --rc genhtml_function_coverage=1 00:32:02.982 --rc genhtml_legend=1 00:32:02.982 --rc geninfo_all_blocks=1 00:32:02.982 --rc geninfo_unexecuted_blocks=1 00:32:02.982 00:32:02.982 ' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:02.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.982 --rc genhtml_branch_coverage=1 00:32:02.982 --rc genhtml_function_coverage=1 00:32:02.982 --rc genhtml_legend=1 00:32:02.982 --rc geninfo_all_blocks=1 00:32:02.982 --rc geninfo_unexecuted_blocks=1 00:32:02.982 00:32:02.982 ' 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.982 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:02.983 
12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.983 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.555 12:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.555 12:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:32:09.555 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:32:09.555 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.555 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.555 12:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:32:09.556 Found net devices under 0000:1a:00.0: cvl_0_0 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.556 12:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:32:09.556 Found net devices under 0000:1a:00.1: cvl_0_1 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:32:09.556 00:32:09.556 --- 10.0.0.2 ping statistics --- 00:32:09.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.556 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:32:09.556 00:32:09.556 --- 10.0.0.1 ping statistics --- 00:32:09.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.556 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1137001 
00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1137001 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1137001 ']' 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.556 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.556 [2024-11-20 12:46:14.645000] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:09.556 [2024-11-20 12:46:14.645845] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:32:09.556 [2024-11-20 12:46:14.645877] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.556 [2024-11-20 12:46:14.718945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:09.556 [2024-11-20 12:46:14.757534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.556 [2024-11-20 12:46:14.757569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.556 [2024-11-20 12:46:14.757575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.556 [2024-11-20 12:46:14.757581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.556 [2024-11-20 12:46:14.757586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.556 [2024-11-20 12:46:14.758929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.556 [2024-11-20 12:46:14.759039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.556 [2024-11-20 12:46:14.759040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.556 [2024-11-20 12:46:14.824590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:09.556 [2024-11-20 12:46:14.825383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:09.556 [2024-11-20 12:46:14.825627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:09.556 [2024-11-20 12:46:14.825791] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:09.816 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.816 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:09.816 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.816 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.816 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.816 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.816 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:10.075 [2024-11-20 12:46:15.651780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.075 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.334 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:10.334 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.593 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:10.594 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:10.594 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:10.852 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7e520cea-b0c7-433d-b2e2-c9ab66958979 00:32:10.852 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e520cea-b0c7-433d-b2e2-c9ab66958979 lvol 20 00:32:11.111 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f3d0ee25-3d7a-448c-88e5-1c1aaff613e9 00:32:11.111 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:11.111 12:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3d0ee25-3d7a-448c-88e5-1c1aaff613e9 00:32:11.370 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:11.629 [2024-11-20 12:46:17.191658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.629 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:11.887 
12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1137560 00:32:11.887 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:11.887 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:12.824 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f3d0ee25-3d7a-448c-88e5-1c1aaff613e9 MY_SNAPSHOT 00:32:13.083 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b41b0078-a89c-4e85-8afb-9f0b91c7c9a9 00:32:13.083 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f3d0ee25-3d7a-448c-88e5-1c1aaff613e9 30 00:32:13.083 12:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b41b0078-a89c-4e85-8afb-9f0b91c7c9a9 MY_CLONE 00:32:13.341 12:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a75fbef4-eaac-443b-8fa6-afa38e00c9d8 00:32:13.341 12:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a75fbef4-eaac-443b-8fa6-afa38e00c9d8 00:32:13.907 12:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1137560 00:32:22.025 Initializing NVMe Controllers 00:32:22.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:22.025 
Controller IO queue size 128, less than required. 00:32:22.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:22.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:22.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:22.025 Initialization complete. Launching workers. 00:32:22.025 ======================================================== 00:32:22.025 Latency(us) 00:32:22.025 Device Information : IOPS MiB/s Average min max 00:32:22.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12915.90 50.45 9915.48 1549.82 62201.56 00:32:22.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13157.20 51.40 9730.12 2034.41 54937.85 00:32:22.025 ======================================================== 00:32:22.025 Total : 26073.09 101.85 9821.94 1549.82 62201.56 00:32:22.025 00:32:22.025 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:22.284 12:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3d0ee25-3d7a-448c-88e5-1c1aaff613e9 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e520cea-b0c7-433d-b2e2-c9ab66958979 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.543 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.543 rmmod nvme_tcp 00:32:22.543 rmmod nvme_fabrics 00:32:22.802 rmmod nvme_keyring 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1137001 ']' 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1137001 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1137001 ']' 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1137001 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1137001 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:22.802 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137001' 00:32:22.802 killing process with pid 1137001 00:32:22.803 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1137001 00:32:22.803 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1137001 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.062 12:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.062 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.967 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:24.967 00:32:24.967 real 0m22.400s 00:32:24.967 user 0m55.525s 00:32:24.967 sys 0m9.562s 00:32:24.967 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.967 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:24.967 ************************************ 00:32:24.967 END TEST nvmf_lvol 00:32:24.967 ************************************ 00:32:24.967 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:24.967 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:24.967 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.967 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.227 ************************************ 00:32:25.227 START TEST nvmf_lvs_grow 00:32:25.227 ************************************ 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:25.227 * Looking for test storage... 
00:32:25.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.227 12:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.227 12:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.227 --rc genhtml_branch_coverage=1 00:32:25.227 --rc genhtml_function_coverage=1 00:32:25.227 --rc genhtml_legend=1 00:32:25.227 --rc geninfo_all_blocks=1 00:32:25.227 --rc geninfo_unexecuted_blocks=1 00:32:25.227 00:32:25.227 ' 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.227 --rc genhtml_branch_coverage=1 00:32:25.227 --rc genhtml_function_coverage=1 00:32:25.227 --rc genhtml_legend=1 00:32:25.227 --rc geninfo_all_blocks=1 00:32:25.227 --rc geninfo_unexecuted_blocks=1 00:32:25.227 00:32:25.227 ' 00:32:25.227 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.227 --rc genhtml_branch_coverage=1 00:32:25.227 --rc genhtml_function_coverage=1 00:32:25.227 --rc genhtml_legend=1 00:32:25.227 --rc geninfo_all_blocks=1 00:32:25.227 --rc geninfo_unexecuted_blocks=1 00:32:25.227 00:32:25.227 ' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.228 --rc genhtml_branch_coverage=1 00:32:25.228 --rc genhtml_function_coverage=1 00:32:25.228 --rc genhtml_legend=1 00:32:25.228 --rc geninfo_all_blocks=1 00:32:25.228 --rc 
geninfo_unexecuted_blocks=1 00:32:25.228 00:32:25.228 ' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:32:25.228 12:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.228 12:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.228 12:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.228 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:31.798 
12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.798 12:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.798 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:31.799 12:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:32:31.799 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:32:31.799 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:32:31.799 Found net devices under 0000:1a:00.0: cvl_0_0 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.799 12:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:32:31.799 Found net devices under 0000:1a:00.1: cvl_0_1 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:31.799 
12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:31.799 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:31.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:32:31.799 00:32:31.799 --- 10.0.0.2 ping statistics --- 00:32:31.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.799 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:31.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:32:31.799 00:32:31.799 --- 10.0.0.1 ping statistics --- 00:32:31.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.799 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:31.799 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:31.800 12:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1143162 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1143162 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1143162 ']' 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:31.800 [2024-11-20 12:46:37.156739] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:31.800 [2024-11-20 12:46:37.157574] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:32:31.800 [2024-11-20 12:46:37.157607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.800 [2024-11-20 12:46:37.213496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.800 [2024-11-20 12:46:37.251393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.800 [2024-11-20 12:46:37.251430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.800 [2024-11-20 12:46:37.251436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.800 [2024-11-20 12:46:37.251442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.800 [2024-11-20 12:46:37.251447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.800 [2024-11-20 12:46:37.251991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.800 [2024-11-20 12:46:37.315875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.800 [2024-11-20 12:46:37.316075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.800 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:31.800 [2024-11-20 12:46:37.536744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.059 ************************************ 00:32:32.059 START TEST lvs_grow_clean 00:32:32.059 ************************************ 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:32.059 12:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:32.059 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:32.318 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:32.318 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:32.318 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:32.577 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:32.577 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:32.577 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b13ca0a0-2648-4485-8dce-0d14e074055c lvol 150 00:32:32.847 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=41dc5fc6-ef36-40f7-a207-f15b08efb046 00:32:32.847 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.847 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:32.848 [2024-11-20 12:46:38.540361] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:32.848 [2024-11-20 12:46:38.540519] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:32.848 true 00:32:32.848 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:32.848 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:33.122 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:33.122 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:33.407 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 41dc5fc6-ef36-40f7-a207-f15b08efb046 00:32:33.407 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:33.690 [2024-11-20 12:46:39.228872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1143479 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1143479 /var/tmp/bdevperf.sock 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1143479 ']' 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.690 12:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:33.949 [2024-11-20 12:46:39.458778] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:32:33.949 [2024-11-20 12:46:39.458822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143479 ] 00:32:33.949 [2024-11-20 12:46:39.532611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.949 [2024-11-20 12:46:39.571938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.515 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.515 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:34.515 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:34.774 Nvme0n1 00:32:34.774 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:35.033 [ 00:32:35.033 { 00:32:35.033 "name": "Nvme0n1", 00:32:35.033 "aliases": [ 00:32:35.033 "41dc5fc6-ef36-40f7-a207-f15b08efb046" 00:32:35.033 ], 00:32:35.033 "product_name": "NVMe disk", 00:32:35.033 
"block_size": 4096, 00:32:35.033 "num_blocks": 38912, 00:32:35.033 "uuid": "41dc5fc6-ef36-40f7-a207-f15b08efb046", 00:32:35.033 "numa_id": 0, 00:32:35.033 "assigned_rate_limits": { 00:32:35.033 "rw_ios_per_sec": 0, 00:32:35.033 "rw_mbytes_per_sec": 0, 00:32:35.033 "r_mbytes_per_sec": 0, 00:32:35.033 "w_mbytes_per_sec": 0 00:32:35.033 }, 00:32:35.033 "claimed": false, 00:32:35.033 "zoned": false, 00:32:35.033 "supported_io_types": { 00:32:35.033 "read": true, 00:32:35.033 "write": true, 00:32:35.033 "unmap": true, 00:32:35.033 "flush": true, 00:32:35.033 "reset": true, 00:32:35.033 "nvme_admin": true, 00:32:35.033 "nvme_io": true, 00:32:35.034 "nvme_io_md": false, 00:32:35.034 "write_zeroes": true, 00:32:35.034 "zcopy": false, 00:32:35.034 "get_zone_info": false, 00:32:35.034 "zone_management": false, 00:32:35.034 "zone_append": false, 00:32:35.034 "compare": true, 00:32:35.034 "compare_and_write": true, 00:32:35.034 "abort": true, 00:32:35.034 "seek_hole": false, 00:32:35.034 "seek_data": false, 00:32:35.034 "copy": true, 00:32:35.034 "nvme_iov_md": false 00:32:35.034 }, 00:32:35.034 "memory_domains": [ 00:32:35.034 { 00:32:35.034 "dma_device_id": "system", 00:32:35.034 "dma_device_type": 1 00:32:35.034 } 00:32:35.034 ], 00:32:35.034 "driver_specific": { 00:32:35.034 "nvme": [ 00:32:35.034 { 00:32:35.034 "trid": { 00:32:35.034 "trtype": "TCP", 00:32:35.034 "adrfam": "IPv4", 00:32:35.034 "traddr": "10.0.0.2", 00:32:35.034 "trsvcid": "4420", 00:32:35.034 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:35.034 }, 00:32:35.034 "ctrlr_data": { 00:32:35.034 "cntlid": 1, 00:32:35.034 "vendor_id": "0x8086", 00:32:35.034 "model_number": "SPDK bdev Controller", 00:32:35.034 "serial_number": "SPDK0", 00:32:35.034 "firmware_revision": "25.01", 00:32:35.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.034 "oacs": { 00:32:35.034 "security": 0, 00:32:35.034 "format": 0, 00:32:35.034 "firmware": 0, 00:32:35.034 "ns_manage": 0 00:32:35.034 }, 00:32:35.034 "multi_ctrlr": true, 
00:32:35.034 "ana_reporting": false 00:32:35.034 }, 00:32:35.034 "vs": { 00:32:35.034 "nvme_version": "1.3" 00:32:35.034 }, 00:32:35.034 "ns_data": { 00:32:35.034 "id": 1, 00:32:35.034 "can_share": true 00:32:35.034 } 00:32:35.034 } 00:32:35.034 ], 00:32:35.034 "mp_policy": "active_passive" 00:32:35.034 } 00:32:35.034 } 00:32:35.034 ] 00:32:35.034 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1143744 00:32:35.034 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:35.034 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:35.034 Running I/O for 10 seconds... 00:32:36.404 Latency(us) 00:32:36.404 [2024-11-20T11:46:42.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.404 Nvme0n1 : 1.00 24638.00 96.24 0.00 0.00 0.00 0.00 0.00 00:32:36.404 [2024-11-20T11:46:42.168Z] =================================================================================================================== 00:32:36.404 [2024-11-20T11:46:42.168Z] Total : 24638.00 96.24 0.00 0.00 0.00 0.00 0.00 00:32:36.404 00:32:36.971 12:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:37.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.230 Nvme0n1 : 2.00 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:32:37.230 [2024-11-20T11:46:42.994Z] 
=================================================================================================================== 00:32:37.230 [2024-11-20T11:46:42.994Z] Total : 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:32:37.230 00:32:37.230 true 00:32:37.230 12:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:37.230 12:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:37.489 12:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:37.489 12:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:37.489 12:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1143744 00:32:38.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.057 Nvme0n1 : 3.00 25146.00 98.23 0.00 0.00 0.00 0.00 0.00 00:32:38.057 [2024-11-20T11:46:43.821Z] =================================================================================================================== 00:32:38.057 [2024-11-20T11:46:43.821Z] Total : 25146.00 98.23 0.00 0.00 0.00 0.00 0.00 00:32:38.057 00:32:39.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.434 Nvme0n1 : 4.00 25241.25 98.60 0.00 0.00 0.00 0.00 0.00 00:32:39.434 [2024-11-20T11:46:45.198Z] =================================================================================================================== 00:32:39.434 [2024-11-20T11:46:45.198Z] Total : 25241.25 98.60 0.00 0.00 0.00 0.00 0.00 00:32:39.434 00:32:40.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
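The test asserts that `total_data_clusters` moves from 49 to 99 after `bdev_lvol_grow_lvstore`. Assuming one 4 MiB cluster's worth of space goes to lvstore metadata (an inference from the 49/99 values for the 200M/400M backing file, not something the log states directly), the expected counts can be sketched as:

```shell
cluster_sz=$((4 * 1024 * 1024))   # matches --cluster-sz 4194304 above
md_clusters=1                     # assumption: metadata overhead rounds to one cluster here
before=$((200 * 1024 * 1024 / cluster_sz - md_clusters))  # 200M file -> 49 data clusters
after=$((400 * 1024 * 1024 / cluster_sz - md_clusters))   # 400M file -> 99 data clusters
echo "$before $after"
```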
size: 4096) 00:32:40.370 Nvme0n1 : 5.00 25323.80 98.92 0.00 0.00 0.00 0.00 0.00 00:32:40.370 [2024-11-20T11:46:46.134Z] =================================================================================================================== 00:32:40.370 [2024-11-20T11:46:46.134Z] Total : 25323.80 98.92 0.00 0.00 0.00 0.00 0.00 00:32:40.370 00:32:41.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.304 Nvme0n1 : 6.00 25357.67 99.05 0.00 0.00 0.00 0.00 0.00 00:32:41.304 [2024-11-20T11:46:47.068Z] =================================================================================================================== 00:32:41.304 [2024-11-20T11:46:47.068Z] Total : 25357.67 99.05 0.00 0.00 0.00 0.00 0.00 00:32:41.304 00:32:42.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.237 Nvme0n1 : 7.00 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:42.237 [2024-11-20T11:46:48.001Z] =================================================================================================================== 00:32:42.237 [2024-11-20T11:46:48.001Z] Total : 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:42.237 00:32:43.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.171 Nvme0n1 : 8.00 25431.75 99.34 0.00 0.00 0.00 0.00 0.00 00:32:43.171 [2024-11-20T11:46:48.935Z] =================================================================================================================== 00:32:43.171 [2024-11-20T11:46:48.935Z] Total : 25431.75 99.34 0.00 0.00 0.00 0.00 0.00 00:32:43.171 00:32:44.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.107 Nvme0n1 : 9.00 25444.22 99.39 0.00 0.00 0.00 0.00 0.00 00:32:44.107 [2024-11-20T11:46:49.871Z] =================================================================================================================== 00:32:44.107 [2024-11-20T11:46:49.871Z] Total : 25444.22 99.39 0.00 0.00 0.00 0.00 0.00 00:32:44.107 
00:32:45.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.486 Nvme0n1 : 10.00 25441.50 99.38 0.00 0.00 0.00 0.00 0.00 00:32:45.486 [2024-11-20T11:46:51.250Z] =================================================================================================================== 00:32:45.486 [2024-11-20T11:46:51.250Z] Total : 25441.50 99.38 0.00 0.00 0.00 0.00 0.00 00:32:45.486 00:32:45.486 00:32:45.486 Latency(us) 00:32:45.486 [2024-11-20T11:46:51.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.486 Nvme0n1 : 10.00 25444.46 99.39 0.00 0.00 5027.90 2993.80 26691.03 00:32:45.486 [2024-11-20T11:46:51.250Z] =================================================================================================================== 00:32:45.486 [2024-11-20T11:46:51.250Z] Total : 25444.46 99.39 0.00 0.00 5027.90 2993.80 26691.03 00:32:45.486 { 00:32:45.486 "results": [ 00:32:45.486 { 00:32:45.486 "job": "Nvme0n1", 00:32:45.486 "core_mask": "0x2", 00:32:45.486 "workload": "randwrite", 00:32:45.486 "status": "finished", 00:32:45.486 "queue_depth": 128, 00:32:45.486 "io_size": 4096, 00:32:45.486 "runtime": 10.003201, 00:32:45.486 "iops": 25444.455229880914, 00:32:45.486 "mibps": 99.39240324172232, 00:32:45.486 "io_failed": 0, 00:32:45.486 "io_timeout": 0, 00:32:45.486 "avg_latency_us": 5027.897355697901, 00:32:45.486 "min_latency_us": 2993.8036363636365, 00:32:45.486 "max_latency_us": 26691.025454545455 00:32:45.486 } 00:32:45.486 ], 00:32:45.486 "core_count": 1 00:32:45.486 } 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1143479 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1143479 ']' 00:32:45.486 12:46:50 
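The final summary reports 25444.46 IOPS at 99.39 MiB/s for the 4096-byte randwrite workload. The two figures are consistent, since MiB/s = IOPS × io_size / 2^20; checking that conversion:

```shell
# MiB/s = IOPS * io_size / 2^20, for the 4096-byte I/O size this bdevperf run uses
awk 'BEGIN { printf "%.2f\n", 25444.46 * 4096 / 1048576 }'
```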
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1143479 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1143479 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1143479' 00:32:45.486 killing process with pid 1143479 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1143479 00:32:45.486 Received shutdown signal, test time was about 10.000000 seconds 00:32:45.486 00:32:45.486 Latency(us) 00:32:45.486 [2024-11-20T11:46:51.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.486 [2024-11-20T11:46:51.250Z] =================================================================================================================== 00:32:45.486 [2024-11-20T11:46:51.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.486 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1143479 00:32:45.486 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.486 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:45.745 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:45.745 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:46.004 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:46.005 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:46.005 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:46.264 [2024-11-20 12:46:51.768547] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:46.264 request: 00:32:46.264 { 00:32:46.264 "uuid": "b13ca0a0-2648-4485-8dce-0d14e074055c", 00:32:46.264 "method": 
"bdev_lvol_get_lvstores", 00:32:46.264 "req_id": 1 00:32:46.264 } 00:32:46.264 Got JSON-RPC error response 00:32:46.264 response: 00:32:46.264 { 00:32:46.264 "code": -19, 00:32:46.264 "message": "No such device" 00:32:46.264 } 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.264 12:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.264 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:46.523 aio_bdev 00:32:46.523 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 41dc5fc6-ef36-40f7-a207-f15b08efb046 00:32:46.523 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=41dc5fc6-ef36-40f7-a207-f15b08efb046 00:32:46.523 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:46.523 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:46.523 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:46.523 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:46.523 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:46.781 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 41dc5fc6-ef36-40f7-a207-f15b08efb046 -t 2000 00:32:46.781 [ 00:32:46.781 { 00:32:46.781 "name": "41dc5fc6-ef36-40f7-a207-f15b08efb046", 00:32:46.781 "aliases": [ 00:32:46.781 "lvs/lvol" 00:32:46.781 ], 00:32:46.781 "product_name": "Logical Volume", 00:32:46.781 "block_size": 4096, 00:32:46.781 "num_blocks": 38912, 00:32:46.781 "uuid": "41dc5fc6-ef36-40f7-a207-f15b08efb046", 00:32:46.781 "assigned_rate_limits": { 00:32:46.781 "rw_ios_per_sec": 0, 00:32:46.781 "rw_mbytes_per_sec": 0, 00:32:46.781 "r_mbytes_per_sec": 0, 00:32:46.781 "w_mbytes_per_sec": 0 00:32:46.781 }, 00:32:46.781 "claimed": false, 00:32:46.781 "zoned": false, 00:32:46.781 "supported_io_types": { 00:32:46.781 "read": true, 00:32:46.781 "write": true, 00:32:46.781 "unmap": true, 00:32:46.781 "flush": false, 00:32:46.781 "reset": true, 00:32:46.781 "nvme_admin": false, 00:32:46.781 "nvme_io": false, 00:32:46.781 "nvme_io_md": false, 00:32:46.781 "write_zeroes": true, 00:32:46.781 "zcopy": false, 00:32:46.781 "get_zone_info": false, 00:32:46.781 "zone_management": false, 00:32:46.781 "zone_append": false, 00:32:46.781 "compare": false, 00:32:46.781 "compare_and_write": false, 00:32:46.781 "abort": false, 00:32:46.781 "seek_hole": true, 00:32:46.781 "seek_data": true, 00:32:46.781 "copy": false, 00:32:46.781 "nvme_iov_md": false 00:32:46.781 }, 00:32:46.781 "driver_specific": { 00:32:46.781 "lvol": { 00:32:46.781 "lvol_store_uuid": "b13ca0a0-2648-4485-8dce-0d14e074055c", 00:32:46.782 "base_bdev": "aio_bdev", 00:32:46.782 
"thin_provision": false, 00:32:46.782 "num_allocated_clusters": 38, 00:32:46.782 "snapshot": false, 00:32:46.782 "clone": false, 00:32:46.782 "esnap_clone": false 00:32:46.782 } 00:32:46.782 } 00:32:46.782 } 00:32:46.782 ] 00:32:46.782 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:46.782 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:46.782 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:47.040 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:47.041 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b13ca0a0-2648-4485-8dce-0d14e074055c 00:32:47.041 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:47.299 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:47.299 12:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 41dc5fc6-ef36-40f7-a207-f15b08efb046 00:32:47.558 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b13ca0a0-2648-4485-8dce-0d14e074055c 
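The lvol JSON above shows `"num_blocks": 38912` and `"num_allocated_clusters": 38` for the 150M volume, and the lvstore later reports `free_clusters=61`. Those numbers are consistent with the 4 MiB cluster size: 150 MiB rounds up to 38 clusters, and 99 − 38 = 61. A sketch of that rounding (variable names are illustrative):

```shell
cluster_sz=$((4 * 1024 * 1024))
block_size=4096
lvol_bytes=$((150 * 1024 * 1024))
clusters=$(( (lvol_bytes + cluster_sz - 1) / cluster_sz ))  # ceil(150 MiB / 4 MiB) = 38
blocks=$(( clusters * cluster_sz / block_size ))            # 38 * 1024 blocks/cluster = 38912
free=$(( 99 - clusters ))                                   # matches free_clusters=61
echo "$clusters $blocks $free"
```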
00:32:47.558 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.818 00:32:47.818 real 0m15.886s 00:32:47.818 user 0m15.580s 00:32:47.818 sys 0m1.424s 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:47.818 ************************************ 00:32:47.818 END TEST lvs_grow_clean 00:32:47.818 ************************************ 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:47.818 ************************************ 00:32:47.818 START TEST lvs_grow_dirty 00:32:47.818 ************************************ 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:47.818 12:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.818 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:48.076 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:48.076 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:48.335 12:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:32:48.335 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:32:48.335 12:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:48.594 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:48.594 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:48.594 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e lvol 150 00:32:48.594 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=38c58ee1-bf46-4426-9d1b-3e9a693aa523 00:32:48.594 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:48.594 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:48.853 [2024-11-20 12:46:54.500498] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:48.853 [2024-11-20 
12:46:54.500628] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:48.853 true 00:32:48.853 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:32:48.853 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:49.112 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:49.112 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:49.112 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38c58ee1-bf46-4426-9d1b-3e9a693aa523 00:32:49.371 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:49.630 [2024-11-20 12:46:55.208825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.630 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1146394 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1146394 /var/tmp/bdevperf.sock 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1146394 ']' 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:49.889 [2024-11-20 12:46:55.446582] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:32:49.889 [2024-11-20 12:46:55.446630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146394 ] 00:32:49.889 [2024-11-20 12:46:55.515949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.889 [2024-11-20 12:46:55.555156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:49.889 12:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:50.458 Nvme0n1 00:32:50.459 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:50.459 [ 00:32:50.459 { 00:32:50.459 "name": "Nvme0n1", 00:32:50.459 "aliases": [ 00:32:50.459 "38c58ee1-bf46-4426-9d1b-3e9a693aa523" 00:32:50.459 ], 00:32:50.459 "product_name": "NVMe disk", 00:32:50.459 "block_size": 4096, 00:32:50.459 "num_blocks": 38912, 00:32:50.459 "uuid": "38c58ee1-bf46-4426-9d1b-3e9a693aa523", 00:32:50.459 "numa_id": 0, 00:32:50.459 "assigned_rate_limits": { 00:32:50.459 "rw_ios_per_sec": 0, 00:32:50.459 "rw_mbytes_per_sec": 0, 00:32:50.459 "r_mbytes_per_sec": 0, 00:32:50.459 "w_mbytes_per_sec": 0 00:32:50.459 }, 00:32:50.459 "claimed": false, 00:32:50.459 "zoned": false, 
00:32:50.459 "supported_io_types": { 00:32:50.459 "read": true, 00:32:50.459 "write": true, 00:32:50.459 "unmap": true, 00:32:50.459 "flush": true, 00:32:50.459 "reset": true, 00:32:50.459 "nvme_admin": true, 00:32:50.459 "nvme_io": true, 00:32:50.459 "nvme_io_md": false, 00:32:50.459 "write_zeroes": true, 00:32:50.459 "zcopy": false, 00:32:50.459 "get_zone_info": false, 00:32:50.459 "zone_management": false, 00:32:50.459 "zone_append": false, 00:32:50.459 "compare": true, 00:32:50.459 "compare_and_write": true, 00:32:50.459 "abort": true, 00:32:50.459 "seek_hole": false, 00:32:50.459 "seek_data": false, 00:32:50.459 "copy": true, 00:32:50.459 "nvme_iov_md": false 00:32:50.459 }, 00:32:50.459 "memory_domains": [ 00:32:50.459 { 00:32:50.459 "dma_device_id": "system", 00:32:50.459 "dma_device_type": 1 00:32:50.459 } 00:32:50.459 ], 00:32:50.459 "driver_specific": { 00:32:50.459 "nvme": [ 00:32:50.459 { 00:32:50.459 "trid": { 00:32:50.459 "trtype": "TCP", 00:32:50.459 "adrfam": "IPv4", 00:32:50.459 "traddr": "10.0.0.2", 00:32:50.459 "trsvcid": "4420", 00:32:50.459 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:50.459 }, 00:32:50.459 "ctrlr_data": { 00:32:50.459 "cntlid": 1, 00:32:50.459 "vendor_id": "0x8086", 00:32:50.459 "model_number": "SPDK bdev Controller", 00:32:50.459 "serial_number": "SPDK0", 00:32:50.459 "firmware_revision": "25.01", 00:32:50.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.459 "oacs": { 00:32:50.459 "security": 0, 00:32:50.459 "format": 0, 00:32:50.459 "firmware": 0, 00:32:50.459 "ns_manage": 0 00:32:50.459 }, 00:32:50.459 "multi_ctrlr": true, 00:32:50.459 "ana_reporting": false 00:32:50.459 }, 00:32:50.459 "vs": { 00:32:50.459 "nvme_version": "1.3" 00:32:50.459 }, 00:32:50.459 "ns_data": { 00:32:50.459 "id": 1, 00:32:50.459 "can_share": true 00:32:50.459 } 00:32:50.459 } 00:32:50.459 ], 00:32:50.459 "mp_policy": "active_passive" 00:32:50.459 } 00:32:50.459 } 00:32:50.459 ] 00:32:50.459 12:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1146429 00:32:50.459 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:50.459 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:50.718 Running I/O for 10 seconds... 00:32:51.655 Latency(us) 00:32:51.655 [2024-11-20T11:46:57.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.655 Nvme0n1 : 1.00 24638.00 96.24 0.00 0.00 0.00 0.00 0.00 00:32:51.655 [2024-11-20T11:46:57.419Z] =================================================================================================================== 00:32:51.655 [2024-11-20T11:46:57.419Z] Total : 24638.00 96.24 0.00 0.00 0.00 0.00 0.00 00:32:51.655 00:32:52.593 12:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:32:52.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.593 Nvme0n1 : 2.00 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:32:52.593 [2024-11-20T11:46:58.357Z] =================================================================================================================== 00:32:52.593 [2024-11-20T11:46:58.357Z] Total : 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:32:52.593 00:32:52.852 true 00:32:52.852 12:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:32:52.852 12:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:52.852 12:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:52.852 12:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:52.852 12:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1146429 00:32:53.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.787 Nvme0n1 : 3.00 25146.00 98.23 0.00 0.00 0.00 0.00 0.00 00:32:53.787 [2024-11-20T11:46:59.551Z] =================================================================================================================== 00:32:53.787 [2024-11-20T11:46:59.551Z] Total : 25146.00 98.23 0.00 0.00 0.00 0.00 0.00 00:32:53.787 00:32:54.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.723 Nvme0n1 : 4.00 25209.50 98.47 0.00 0.00 0.00 0.00 0.00 00:32:54.723 [2024-11-20T11:47:00.487Z] =================================================================================================================== 00:32:54.723 [2024-11-20T11:47:00.487Z] Total : 25209.50 98.47 0.00 0.00 0.00 0.00 0.00 00:32:54.723 00:32:55.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.658 Nvme0n1 : 5.00 25273.00 98.72 0.00 0.00 0.00 0.00 0.00 00:32:55.658 [2024-11-20T11:47:01.422Z] =================================================================================================================== 00:32:55.658 [2024-11-20T11:47:01.422Z] Total : 25273.00 98.72 0.00 0.00 0.00 0.00 0.00 00:32:55.658 00:32:56.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:56.592 Nvme0n1 : 6.00 25336.50 98.97 0.00 0.00 0.00 0.00 0.00 00:32:56.592 [2024-11-20T11:47:02.356Z] =================================================================================================================== 00:32:56.592 [2024-11-20T11:47:02.356Z] Total : 25336.50 98.97 0.00 0.00 0.00 0.00 0.00 00:32:56.592 00:32:57.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.968 Nvme0n1 : 7.00 25366.14 99.09 0.00 0.00 0.00 0.00 0.00 00:32:57.968 [2024-11-20T11:47:03.732Z] =================================================================================================================== 00:32:57.968 [2024-11-20T11:47:03.732Z] Total : 25366.14 99.09 0.00 0.00 0.00 0.00 0.00 00:32:57.968 00:32:58.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:58.914 Nvme0n1 : 8.00 25402.12 99.23 0.00 0.00 0.00 0.00 0.00 00:32:58.914 [2024-11-20T11:47:04.678Z] =================================================================================================================== 00:32:58.914 [2024-11-20T11:47:04.678Z] Total : 25402.12 99.23 0.00 0.00 0.00 0.00 0.00 00:32:58.914 00:32:59.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.850 Nvme0n1 : 9.00 25430.11 99.34 0.00 0.00 0.00 0.00 0.00 00:32:59.850 [2024-11-20T11:47:05.614Z] =================================================================================================================== 00:32:59.850 [2024-11-20T11:47:05.614Z] Total : 25430.11 99.34 0.00 0.00 0.00 0.00 0.00 00:32:59.850 00:33:00.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.790 Nvme0n1 : 10.00 25452.50 99.42 0.00 0.00 0.00 0.00 0.00 00:33:00.790 [2024-11-20T11:47:06.554Z] =================================================================================================================== 00:33:00.790 [2024-11-20T11:47:06.554Z] Total : 25452.50 99.42 0.00 0.00 0.00 0.00 0.00 00:33:00.790 00:33:00.790 
00:33:00.790 Latency(us) 00:33:00.790 [2024-11-20T11:47:06.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.790 Nvme0n1 : 10.00 25454.83 99.43 0.00 0.00 5026.05 2993.80 27405.96 00:33:00.790 [2024-11-20T11:47:06.554Z] =================================================================================================================== 00:33:00.790 [2024-11-20T11:47:06.554Z] Total : 25454.83 99.43 0.00 0.00 5026.05 2993.80 27405.96 00:33:00.790 { 00:33:00.790 "results": [ 00:33:00.790 { 00:33:00.790 "job": "Nvme0n1", 00:33:00.790 "core_mask": "0x2", 00:33:00.790 "workload": "randwrite", 00:33:00.790 "status": "finished", 00:33:00.790 "queue_depth": 128, 00:33:00.790 "io_size": 4096, 00:33:00.790 "runtime": 10.004115, 00:33:00.790 "iops": 25454.82533937285, 00:33:00.790 "mibps": 99.43291148192519, 00:33:00.790 "io_failed": 0, 00:33:00.790 "io_timeout": 0, 00:33:00.790 "avg_latency_us": 5026.050961126067, 00:33:00.790 "min_latency_us": 2993.8036363636365, 00:33:00.790 "max_latency_us": 27405.963636363635 00:33:00.790 } 00:33:00.790 ], 00:33:00.790 "core_count": 1 00:33:00.790 } 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1146394 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1146394 ']' 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1146394 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:00.790 12:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146394 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146394' 00:33:00.790 killing process with pid 1146394 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1146394 00:33:00.790 Received shutdown signal, test time was about 10.000000 seconds 00:33:00.790 00:33:00.790 Latency(us) 00:33:00.790 [2024-11-20T11:47:06.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.790 [2024-11-20T11:47:06.554Z] =================================================================================================================== 00:33:00.790 [2024-11-20T11:47:06.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.790 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1146394 00:33:01.049 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:01.049 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:01.309 12:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:01.309 12:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1143162 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1143162 00:33:01.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1143162 Killed "${NVMF_APP[@]}" "$@" 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1148274 00:33:01.569 12:47:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1148274 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1148274 ']' 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.569 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:01.569 [2024-11-20 12:47:07.194794] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:01.569 [2024-11-20 12:47:07.195625] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:33:01.569 [2024-11-20 12:47:07.195659] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.569 [2024-11-20 12:47:07.270060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.569 [2024-11-20 12:47:07.307378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.569 [2024-11-20 12:47:07.307418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.569 [2024-11-20 12:47:07.307425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.569 [2024-11-20 12:47:07.307431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.569 [2024-11-20 12:47:07.307435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.569 [2024-11-20 12:47:07.308038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.828 [2024-11-20 12:47:07.372890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:01.828 [2024-11-20 12:47:07.373096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:02.397 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.397 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:02.397 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:02.397 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.397 12:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:02.397 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.397 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:02.656 [2024-11-20 12:47:08.197400] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:02.656 [2024-11-20 12:47:08.197611] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:02.656 [2024-11-20 12:47:08.197693] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 38c58ee1-bf46-4426-9d1b-3e9a693aa523 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=38c58ee1-bf46-4426-9d1b-3e9a693aa523 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:02.656 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 38c58ee1-bf46-4426-9d1b-3e9a693aa523 -t 2000 00:33:02.915 [ 00:33:02.915 { 00:33:02.915 "name": "38c58ee1-bf46-4426-9d1b-3e9a693aa523", 00:33:02.915 "aliases": [ 00:33:02.915 "lvs/lvol" 00:33:02.915 ], 00:33:02.915 "product_name": "Logical Volume", 00:33:02.915 "block_size": 4096, 00:33:02.915 "num_blocks": 38912, 00:33:02.915 "uuid": "38c58ee1-bf46-4426-9d1b-3e9a693aa523", 00:33:02.915 "assigned_rate_limits": { 00:33:02.915 "rw_ios_per_sec": 0, 00:33:02.915 "rw_mbytes_per_sec": 0, 00:33:02.915 "r_mbytes_per_sec": 0, 00:33:02.915 "w_mbytes_per_sec": 0 00:33:02.915 }, 00:33:02.915 "claimed": false, 00:33:02.915 "zoned": false, 00:33:02.915 "supported_io_types": { 00:33:02.915 "read": true, 00:33:02.915 "write": true, 00:33:02.915 "unmap": true, 00:33:02.915 "flush": false, 00:33:02.915 "reset": true, 00:33:02.915 "nvme_admin": false, 00:33:02.915 "nvme_io": false, 00:33:02.915 "nvme_io_md": false, 00:33:02.915 "write_zeroes": true, 
00:33:02.915 "zcopy": false, 00:33:02.915 "get_zone_info": false, 00:33:02.915 "zone_management": false, 00:33:02.915 "zone_append": false, 00:33:02.915 "compare": false, 00:33:02.915 "compare_and_write": false, 00:33:02.915 "abort": false, 00:33:02.915 "seek_hole": true, 00:33:02.915 "seek_data": true, 00:33:02.915 "copy": false, 00:33:02.915 "nvme_iov_md": false 00:33:02.915 }, 00:33:02.915 "driver_specific": { 00:33:02.915 "lvol": { 00:33:02.915 "lvol_store_uuid": "ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e", 00:33:02.915 "base_bdev": "aio_bdev", 00:33:02.915 "thin_provision": false, 00:33:02.915 "num_allocated_clusters": 38, 00:33:02.915 "snapshot": false, 00:33:02.915 "clone": false, 00:33:02.915 "esnap_clone": false 00:33:02.915 } 00:33:02.915 } 00:33:02.915 } 00:33:02.915 ] 00:33:02.915 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:02.915 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:02.915 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:03.175 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:03.175 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:03.175 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:03.175 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:03.175 12:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:03.435 [2024-11-20 12:47:09.076602] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:03.435 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:03.693 request: 00:33:03.693 { 00:33:03.693 "uuid": "ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e", 00:33:03.693 "method": "bdev_lvol_get_lvstores", 00:33:03.693 "req_id": 1 00:33:03.693 } 00:33:03.693 Got JSON-RPC error response 00:33:03.693 response: 00:33:03.693 { 00:33:03.693 "code": -19, 00:33:03.693 "message": "No such device" 00:33:03.693 } 00:33:03.694 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:03.694 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:03.694 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:03.694 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:03.694 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:03.952 aio_bdev 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 38c58ee1-bf46-4426-9d1b-3e9a693aa523 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=38c58ee1-bf46-4426-9d1b-3e9a693aa523 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:03.952 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 38c58ee1-bf46-4426-9d1b-3e9a693aa523 -t 2000 00:33:04.212 [ 00:33:04.212 { 00:33:04.212 "name": "38c58ee1-bf46-4426-9d1b-3e9a693aa523", 00:33:04.212 "aliases": [ 00:33:04.212 "lvs/lvol" 00:33:04.212 ], 00:33:04.212 "product_name": "Logical Volume", 00:33:04.212 "block_size": 4096, 00:33:04.212 "num_blocks": 38912, 00:33:04.212 "uuid": "38c58ee1-bf46-4426-9d1b-3e9a693aa523", 00:33:04.212 "assigned_rate_limits": { 00:33:04.212 "rw_ios_per_sec": 0, 00:33:04.212 "rw_mbytes_per_sec": 0, 00:33:04.212 
"r_mbytes_per_sec": 0, 00:33:04.212 "w_mbytes_per_sec": 0 00:33:04.212 }, 00:33:04.212 "claimed": false, 00:33:04.212 "zoned": false, 00:33:04.212 "supported_io_types": { 00:33:04.212 "read": true, 00:33:04.212 "write": true, 00:33:04.212 "unmap": true, 00:33:04.212 "flush": false, 00:33:04.212 "reset": true, 00:33:04.212 "nvme_admin": false, 00:33:04.212 "nvme_io": false, 00:33:04.212 "nvme_io_md": false, 00:33:04.212 "write_zeroes": true, 00:33:04.212 "zcopy": false, 00:33:04.212 "get_zone_info": false, 00:33:04.212 "zone_management": false, 00:33:04.212 "zone_append": false, 00:33:04.212 "compare": false, 00:33:04.212 "compare_and_write": false, 00:33:04.212 "abort": false, 00:33:04.212 "seek_hole": true, 00:33:04.212 "seek_data": true, 00:33:04.212 "copy": false, 00:33:04.212 "nvme_iov_md": false 00:33:04.212 }, 00:33:04.212 "driver_specific": { 00:33:04.212 "lvol": { 00:33:04.212 "lvol_store_uuid": "ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e", 00:33:04.212 "base_bdev": "aio_bdev", 00:33:04.212 "thin_provision": false, 00:33:04.212 "num_allocated_clusters": 38, 00:33:04.212 "snapshot": false, 00:33:04.212 "clone": false, 00:33:04.212 "esnap_clone": false 00:33:04.212 } 00:33:04.212 } 00:33:04.212 } 00:33:04.212 ] 00:33:04.212 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:04.212 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:04.212 12:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:04.472 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:04.472 12:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:04.472 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:04.472 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:04.472 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38c58ee1-bf46-4426-9d1b-3e9a693aa523 00:33:04.731 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ffbe1f5a-e59b-4c5d-9200-1a46cfa4b38e 00:33:04.990 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:05.249 00:33:05.249 real 0m17.280s 00:33:05.249 user 0m34.566s 00:33:05.249 sys 0m3.365s 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:05.249 ************************************ 00:33:05.249 END TEST lvs_grow_dirty 00:33:05.249 ************************************ 
00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:05.249 nvmf_trace.0 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:05.249 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.250 12:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.250 rmmod nvme_tcp 00:33:05.250 rmmod nvme_fabrics 00:33:05.250 rmmod nvme_keyring 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1148274 ']' 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1148274 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1148274 ']' 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1148274 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:05.250 12:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.250 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148274 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.509 
12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148274' 00:33:05.509 killing process with pid 1148274 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1148274 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1148274 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.509 12:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.046 
12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.046 00:33:08.046 real 0m42.577s 00:33:08.046 user 0m52.733s 00:33:08.046 sys 0m9.825s 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.046 ************************************ 00:33:08.046 END TEST nvmf_lvs_grow 00:33:08.046 ************************************ 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:08.046 ************************************ 00:33:08.046 START TEST nvmf_bdev_io_wait 00:33:08.046 ************************************ 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:08.046 * Looking for test storage... 
00:33:08.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:08.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.046 --rc genhtml_branch_coverage=1 00:33:08.046 --rc genhtml_function_coverage=1 00:33:08.046 --rc genhtml_legend=1 00:33:08.046 --rc geninfo_all_blocks=1 00:33:08.046 --rc geninfo_unexecuted_blocks=1 00:33:08.046 00:33:08.046 ' 00:33:08.046 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:08.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.046 --rc genhtml_branch_coverage=1 00:33:08.046 --rc genhtml_function_coverage=1 00:33:08.046 --rc genhtml_legend=1 00:33:08.046 --rc geninfo_all_blocks=1 00:33:08.046 --rc geninfo_unexecuted_blocks=1 00:33:08.046 00:33:08.046 ' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:08.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.047 --rc genhtml_branch_coverage=1 00:33:08.047 --rc genhtml_function_coverage=1 00:33:08.047 --rc genhtml_legend=1 00:33:08.047 --rc geninfo_all_blocks=1 00:33:08.047 --rc geninfo_unexecuted_blocks=1 00:33:08.047 00:33:08.047 ' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:08.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.047 --rc genhtml_branch_coverage=1 00:33:08.047 --rc genhtml_function_coverage=1 
00:33:08.047 --rc genhtml_legend=1 00:33:08.047 --rc geninfo_all_blocks=1 00:33:08.047 --rc geninfo_unexecuted_blocks=1 00:33:08.047 00:33:08.047 ' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:33:08.047 12:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.047 12:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.047 12:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.047 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.047 12:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.048 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:14.635 12:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:33:14.635 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:33:14.635 Found 
0000:1a:00.1 (0x8086 - 0x159b) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:33:14.635 Found net devices under 0000:1a:00.0: cvl_0_0 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:33:14.635 Found net devices under 0000:1a:00.1: cvl_0_1 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:14.635 12:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:14.635 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:14.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:14.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:33:14.636 00:33:14.636 --- 10.0.0.2 ping statistics --- 00:33:14.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.636 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:14.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:33:14.636 00:33:14.636 --- 10.0.0.1 ping statistics --- 00:33:14.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.636 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:14.636 12:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1152677 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1152677 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1152677 ']' 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
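The `nvmf_tcp_init` steps traced above (nvmf/common.sh) build a loopback topology over real hardware: one E810 port is moved into a private network namespace and becomes the target side (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk), while the other stays in the default namespace as the initiator side (10.0.0.1 on cvl_0_1). A condensed sketch of that sequence; `run` only echoes the commands so it can be inspected without root, the interface names are simply the ones this CI host happened to have, and the iptables comment tag the real `ipts` wrapper adds is omitted:

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology set up by nvmf_tcp_init in the log above.
# `run` prints instead of executing; drop the echo to run for real (needs root).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # NIC port moved into the namespace (target side)
INITIATOR_IF=cvl_0_1     # NIC port left in the default namespace (initiator side)
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port toward the initiator-side interface, then verify
# reachability both ways, as the log's ping checks do.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Every target-side command afterwards is then wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array in the trace).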
00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.636 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.636 [2024-11-20 12:47:19.754907] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:14.636 [2024-11-20 12:47:19.755782] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:33:14.636 [2024-11-20 12:47:19.755813] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.636 [2024-11-20 12:47:19.833323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:14.636 [2024-11-20 12:47:19.873971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.636 [2024-11-20 12:47:19.874008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.636 [2024-11-20 12:47:19.874015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.636 [2024-11-20 12:47:19.874020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.636 [2024-11-20 12:47:19.874025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:14.636 [2024-11-20 12:47:19.875533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.636 [2024-11-20 12:47:19.875567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.636 [2024-11-20 12:47:19.875655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.636 [2024-11-20 12:47:19.875656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:14.636 [2024-11-20 12:47:19.876252] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.896 12:47:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.896 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:15.156 [2024-11-20 12:47:20.680805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:15.156 [2024-11-20 12:47:20.681614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:15.156 [2024-11-20 12:47:20.681658] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:15.156 [2024-11-20 12:47:20.681809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:15.156 [2024-11-20 12:47:20.692420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:15.156 Malloc0 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.156 12:47:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:15.156 [2024-11-20 12:47:20.760680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1152914 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1152916 00:33:15.156 12:47:20 
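The target bring-up traced above is a short RPC sequence. A dry-run sketch of it: in the real test, `rpc_cmd` forwards to SPDK's scripts/rpc.py against the target's /var/tmp/spdk.sock, while here it just echoes so the sequence can be read without a running target. The deliberately tiny bdev_io pool (`-p 5 -c 1`) is what this bdev_io_wait test exists to exercise:

```shell
#!/usr/bin/env bash
# Dry-run of the RPC sequence bdev_io_wait.sh issues in the log above.
rpc_cmd() { echo "+ rpc.py $*"; }

rpc_cmd bdev_set_options -p 5 -c 1          # tiny bdev_io pool: forces IO_WAIT retries
rpc_cmd framework_start_init                # release the --wait-for-rpc hold-off
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```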
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.156 { 00:33:15.156 "params": { 00:33:15.156 "name": "Nvme$subsystem", 00:33:15.156 "trtype": "$TEST_TRANSPORT", 00:33:15.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.156 "adrfam": "ipv4", 00:33:15.156 "trsvcid": "$NVMF_PORT", 00:33:15.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.156 "hdgst": ${hdgst:-false}, 00:33:15.156 "ddgst": ${ddgst:-false} 00:33:15.156 }, 00:33:15.156 "method": "bdev_nvme_attach_controller" 00:33:15.156 } 00:33:15.156 EOF 00:33:15.156 )") 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1152918 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.156 12:47:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.156 { 00:33:15.156 "params": { 00:33:15.156 "name": "Nvme$subsystem", 00:33:15.156 "trtype": "$TEST_TRANSPORT", 00:33:15.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.156 "adrfam": "ipv4", 00:33:15.156 "trsvcid": "$NVMF_PORT", 00:33:15.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.156 "hdgst": ${hdgst:-false}, 00:33:15.156 "ddgst": ${ddgst:-false} 00:33:15.156 }, 00:33:15.156 "method": "bdev_nvme_attach_controller" 00:33:15.156 } 00:33:15.156 EOF 00:33:15.156 )") 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1152921 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:15.156 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.156 { 00:33:15.156 "params": { 00:33:15.156 "name": "Nvme$subsystem", 00:33:15.156 "trtype": "$TEST_TRANSPORT", 00:33:15.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.156 "adrfam": "ipv4", 00:33:15.156 "trsvcid": "$NVMF_PORT", 00:33:15.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.157 "hdgst": ${hdgst:-false}, 00:33:15.157 "ddgst": ${ddgst:-false} 00:33:15.157 }, 00:33:15.157 "method": "bdev_nvme_attach_controller" 00:33:15.157 } 00:33:15.157 EOF 00:33:15.157 )") 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.157 { 00:33:15.157 "params": { 00:33:15.157 "name": "Nvme$subsystem", 00:33:15.157 "trtype": "$TEST_TRANSPORT", 00:33:15.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.157 "adrfam": "ipv4", 00:33:15.157 "trsvcid": "$NVMF_PORT", 00:33:15.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.157 "hdgst": ${hdgst:-false}, 00:33:15.157 "ddgst": ${ddgst:-false} 00:33:15.157 }, 00:33:15.157 "method": 
"bdev_nvme_attach_controller" 00:33:15.157 } 00:33:15.157 EOF 00:33:15.157 )") 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1152914 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:15.157 "params": { 00:33:15.157 "name": "Nvme1", 00:33:15.157 "trtype": "tcp", 00:33:15.157 "traddr": "10.0.0.2", 00:33:15.157 "adrfam": "ipv4", 00:33:15.157 "trsvcid": "4420", 00:33:15.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.157 "hdgst": false, 00:33:15.157 "ddgst": false 00:33:15.157 }, 00:33:15.157 "method": "bdev_nvme_attach_controller" 00:33:15.157 }' 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:15.157 "params": { 00:33:15.157 "name": "Nvme1", 00:33:15.157 "trtype": "tcp", 00:33:15.157 "traddr": "10.0.0.2", 00:33:15.157 "adrfam": "ipv4", 00:33:15.157 "trsvcid": "4420", 00:33:15.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.157 "hdgst": false, 00:33:15.157 "ddgst": false 00:33:15.157 }, 00:33:15.157 "method": "bdev_nvme_attach_controller" 00:33:15.157 }' 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:15.157 "params": { 00:33:15.157 "name": "Nvme1", 00:33:15.157 "trtype": "tcp", 00:33:15.157 "traddr": "10.0.0.2", 00:33:15.157 "adrfam": "ipv4", 00:33:15.157 "trsvcid": "4420", 00:33:15.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.157 "hdgst": false, 00:33:15.157 "ddgst": false 00:33:15.157 }, 00:33:15.157 "method": "bdev_nvme_attach_controller" 00:33:15.157 }' 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:15.157 12:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:15.157 "params": { 00:33:15.157 "name": "Nvme1", 00:33:15.157 "trtype": "tcp", 00:33:15.157 "traddr": "10.0.0.2", 00:33:15.157 "adrfam": "ipv4", 00:33:15.157 "trsvcid": "4420", 00:33:15.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.157 "hdgst": false, 00:33:15.157 "ddgst": false 00:33:15.157 }, 00:33:15.157 "method": "bdev_nvme_attach_controller" 
00:33:15.157 }' 00:33:15.157 [2024-11-20 12:47:20.810684] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:33:15.157 [2024-11-20 12:47:20.810732] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:15.157 [2024-11-20 12:47:20.812249] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:33:15.157 [2024-11-20 12:47:20.812288] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:15.157 [2024-11-20 12:47:20.812317] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:33:15.157 [2024-11-20 12:47:20.812323] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:33:15.157 [2024-11-20 12:47:20.812361] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:15.157 [2024-11-20 12:47:20.812361] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:15.416 [2024-11-20 12:47:20.987121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.416 [2024-11-20 12:47:21.027337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:15.416 [2024-11-20 12:47:21.080349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.416 [2024-11-20 12:47:21.131867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:15.416 [2024-11-20 12:47:21.133010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.416 [2024-11-20 12:47:21.173742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:15.675 [2024-11-20 12:47:21.190158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.675 [2024-11-20 12:47:21.229817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:15.675 Running I/O for 1 seconds... 00:33:15.675 Running I/O for 1 seconds... 00:33:15.675 Running I/O for 1 seconds... 00:33:15.933 Running I/O for 1 seconds... 
00:33:16.870 15377.00 IOPS, 60.07 MiB/s 00:33:16.870 Latency(us) 00:33:16.870 [2024-11-20T11:47:22.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.870 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:16.870 Nvme1n1 : 1.01 15441.47 60.32 0.00 0.00 8267.51 3217.22 9592.09 00:33:16.870 [2024-11-20T11:47:22.634Z] =================================================================================================================== 00:33:16.870 [2024-11-20T11:47:22.634Z] Total : 15441.47 60.32 0.00 0.00 8267.51 3217.22 9592.09 00:33:16.870 7803.00 IOPS, 30.48 MiB/s 00:33:16.870 Latency(us) 00:33:16.870 [2024-11-20T11:47:22.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.870 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:16.870 Nvme1n1 : 1.01 7854.26 30.68 0.00 0.00 16199.38 4379.00 24546.21 00:33:16.870 [2024-11-20T11:47:22.634Z] =================================================================================================================== 00:33:16.870 [2024-11-20T11:47:22.634Z] Total : 7854.26 30.68 0.00 0.00 16199.38 4379.00 24546.21 00:33:16.870 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1152916 00:33:16.870 251168.00 IOPS, 981.12 MiB/s 00:33:16.870 Latency(us) 00:33:16.870 [2024-11-20T11:47:22.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.870 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:16.871 Nvme1n1 : 1.00 250781.77 979.62 0.00 0.00 507.76 238.31 1511.80 00:33:16.871 [2024-11-20T11:47:22.635Z] =================================================================================================================== 00:33:16.871 [2024-11-20T11:47:22.635Z] Total : 250781.77 979.62 0.00 0.00 507.76 238.31 1511.80 00:33:16.871 8466.00 IOPS, 33.07 MiB/s 00:33:16.871 Latency(us) 00:33:16.871 
[2024-11-20T11:47:22.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.871 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:16.871 Nvme1n1 : 1.00 8559.00 33.43 0.00 0.00 14921.27 3440.64 31933.91 00:33:16.871 [2024-11-20T11:47:22.635Z] =================================================================================================================== 00:33:16.871 [2024-11-20T11:47:22.635Z] Total : 8559.00 33.43 0.00 0.00 14921.27 3440.64 31933.91 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1152918 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1152921 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.871 12:47:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.871 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.871 rmmod nvme_tcp 00:33:16.871 rmmod nvme_fabrics 00:33:17.131 rmmod nvme_keyring 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1152677 ']' 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1152677 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1152677 ']' 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1152677 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1152677 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1152677' 00:33:17.131 killing process with pid 1152677 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1152677 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1152677 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.131 12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.131 
12:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.766 12:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:19.766 00:33:19.766 real 0m11.555s 00:33:19.766 user 0m15.059s 00:33:19.766 sys 0m6.452s 00:33:19.766 12:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.766 12:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:19.766 ************************************ 00:33:19.766 END TEST nvmf_bdev_io_wait 00:33:19.766 ************************************ 00:33:19.766 12:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:19.766 12:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:19.766 12:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.766 12:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:19.766 ************************************ 00:33:19.766 START TEST nvmf_queue_depth 00:33:19.766 ************************************ 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:19.766 * Looking for test storage... 
00:33:19.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.766 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:19.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.766 --rc genhtml_branch_coverage=1 00:33:19.766 --rc genhtml_function_coverage=1 00:33:19.767 --rc genhtml_legend=1 00:33:19.767 --rc geninfo_all_blocks=1 00:33:19.767 --rc geninfo_unexecuted_blocks=1 00:33:19.767 00:33:19.767 ' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:19.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.767 --rc genhtml_branch_coverage=1 00:33:19.767 --rc genhtml_function_coverage=1 00:33:19.767 --rc genhtml_legend=1 00:33:19.767 --rc geninfo_all_blocks=1 00:33:19.767 --rc geninfo_unexecuted_blocks=1 00:33:19.767 00:33:19.767 ' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:19.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.767 --rc genhtml_branch_coverage=1 00:33:19.767 --rc genhtml_function_coverage=1 00:33:19.767 --rc genhtml_legend=1 00:33:19.767 --rc geninfo_all_blocks=1 00:33:19.767 --rc geninfo_unexecuted_blocks=1 00:33:19.767 00:33:19.767 ' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:19.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.767 --rc genhtml_branch_coverage=1 00:33:19.767 --rc genhtml_function_coverage=1 00:33:19.767 --rc genhtml_legend=1 00:33:19.767 --rc 
geninfo_all_blocks=1 00:33:19.767 --rc geninfo_unexecuted_blocks=1 00:33:19.767 00:33:19.767 ' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.767 12:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:19.767 12:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:19.767 12:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:19.767 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.335 
12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:26.335 12:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:33:26.335 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.335 12:47:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:33:26.335 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:33:26.335 Found net devices under 0000:1a:00.0: cvl_0_0 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:33:26.335 Found net devices under 0000:1a:00.1: cvl_0_1 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:26.335 12:47:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.335 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:26.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:33:26.336 00:33:26.336 --- 10.0.0.2 ping statistics --- 00:33:26.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.336 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:26.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:33:26.336 00:33:26.336 --- 10.0.0.1 ping statistics --- 00:33:26.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.336 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.336 12:47:31 
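The netns plumbing `nvmf_tcp_init` traces above (flush both E810 ports, move the target-side port into a private namespace, address both ends, ping across in each direction) can be sketched as a standalone script. Interface names and IPs are the ones seen in the log; the `run`/`DRY_RUN` wrapper (default on, so no root is needed to preview the commands) is an addition not present in the original script.

```shell
# Sketch of the target-namespace setup from nvmf/common.sh nvmf_tcp_init,
# reconstructed from the trace above. DRY_RUN=1 (the default here) echoes
# each command instead of executing it; set DRY_RUN=0 and run as root to
# actually build the namespace.
set -eu
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2; INITIATOR_IP=10.0.0.1
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 "$TARGET_IP"                          # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"   # target -> initiator
```

The one-ping-each-way check at the end is exactly what the log's `common.sh@290`/`@291` lines verify before the target is started.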
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1156973 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1156973 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1156973 ']' 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.336 12:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.336 [2024-11-20 12:47:31.364649] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:26.336 [2024-11-20 12:47:31.365505] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:33:26.336 [2024-11-20 12:47:31.365535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.336 [2024-11-20 12:47:31.444881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.336 [2024-11-20 12:47:31.480747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.336 [2024-11-20 12:47:31.480778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.336 [2024-11-20 12:47:31.480784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.336 [2024-11-20 12:47:31.480789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.336 [2024-11-20 12:47:31.480793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.336 [2024-11-20 12:47:31.481316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.336 [2024-11-20 12:47:31.545823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:26.336 [2024-11-20 12:47:31.546019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.595 [2024-11-20 12:47:32.229972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.595 Malloc0 00:33:26.595 12:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.595 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.596 [2024-11-20 12:47:32.309973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.596 
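The five `rpc_cmd` calls traced above (`queue_depth.sh@23`–`@27`) configure the target: a TCP transport, a 64 MiB / 512 B-block malloc bdev, a subsystem with that namespace attached, and a listener on 10.0.0.2:4420. The sequence is easier to read flattened; `run_rpc` here is a stand-in wrapper (it echoes by default so the sequence can be previewed — against a live target it would invoke SPDK's `scripts/rpc.py` on `/var/tmp/spdk.sock`, which is what the log's `rpc_cmd` helper does).

```shell
# The target-side RPC sequence from the trace, flattened. run_rpc is a
# preview stand-in; for a live target replace its body with:
#   scripts/rpc.py "$@"
run_rpc() { echo "rpc.py $*"; }

run_rpc nvmf_create_transport -t tcp -o -u 8192
run_rpc bdev_malloc_create 64 512 -b Malloc0
run_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```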
12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1157129 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1157129 /var/tmp/bdevperf.sock 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1157129 ']' 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:26.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.596 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.855 [2024-11-20 12:47:32.359547] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:33:26.855 [2024-11-20 12:47:32.359591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157129 ] 00:33:26.855 [2024-11-20 12:47:32.432856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.855 [2024-11-20 12:47:32.472218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.855 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.855 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:26.855 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:26.855 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.855 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.114 NVMe0n1 00:33:27.114 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.114 12:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:27.114 Running I/O for 10 seconds... 
00:33:29.428 12808.00 IOPS, 50.03 MiB/s [2024-11-20T11:47:36.127Z] 13295.50 IOPS, 51.94 MiB/s [2024-11-20T11:47:37.063Z] 13316.33 IOPS, 52.02 MiB/s [2024-11-20T11:47:38.000Z] 13342.00 IOPS, 52.12 MiB/s [2024-11-20T11:47:38.936Z] 13460.00 IOPS, 52.58 MiB/s [2024-11-20T11:47:39.871Z] 13481.83 IOPS, 52.66 MiB/s [2024-11-20T11:47:40.806Z] 13515.00 IOPS, 52.79 MiB/s [2024-11-20T11:47:42.182Z] 13549.38 IOPS, 52.93 MiB/s [2024-11-20T11:47:43.118Z] 13561.56 IOPS, 52.97 MiB/s [2024-11-20T11:47:43.118Z] 13604.70 IOPS, 53.14 MiB/s 00:33:37.354 Latency(us) 00:33:37.354 [2024-11-20T11:47:43.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.354 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:37.354 Verification LBA range: start 0x0 length 0x4000 00:33:37.354 NVMe0n1 : 10.06 13622.64 53.21 0.00 0.00 74934.50 17396.83 48854.11 00:33:37.354 [2024-11-20T11:47:43.118Z] =================================================================================================================== 00:33:37.354 [2024-11-20T11:47:43.118Z] Total : 13622.64 53.21 0.00 0.00 74934.50 17396.83 48854.11 00:33:37.354 { 00:33:37.354 "results": [ 00:33:37.354 { 00:33:37.354 "job": "NVMe0n1", 00:33:37.354 "core_mask": "0x1", 00:33:37.354 "workload": "verify", 00:33:37.354 "status": "finished", 00:33:37.354 "verify_range": { 00:33:37.354 "start": 0, 00:33:37.354 "length": 16384 00:33:37.354 }, 00:33:37.354 "queue_depth": 1024, 00:33:37.354 "io_size": 4096, 00:33:37.354 "runtime": 10.058769, 00:33:37.354 "iops": 13622.641100516375, 00:33:37.354 "mibps": 53.21344179889209, 00:33:37.354 "io_failed": 0, 00:33:37.354 "io_timeout": 0, 00:33:37.354 "avg_latency_us": 74934.49720435985, 00:33:37.354 "min_latency_us": 17396.82909090909, 00:33:37.354 "max_latency_us": 48854.10909090909 00:33:37.354 } 00:33:37.354 ], 00:33:37.354 "core_count": 1 00:33:37.354 } 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
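The MiB/s column in the bdevperf summary above is just IOPS times the I/O size (`-o 4096`) converted to mebibytes; redoing the conversion with the run's own JSON numbers reproduces the reported 53.21 MiB/s.

```shell
# Cross-check of the bdevperf summary: "mibps" = iops * io_size / 2^20,
# using the exact iops value from the JSON result block above.
mibps=$(awk 'BEGIN { printf "%.2f", 13622.641100516375 * 4096 / (1024 * 1024) }')
echo "$mibps MiB/s"   # matches the summary's 53.21 MiB/s
```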
target/queue_depth.sh@39 -- # killprocess 1157129 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1157129 ']' 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1157129 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1157129 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1157129' 00:33:37.354 killing process with pid 1157129 00:33:37.354 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1157129 00:33:37.354 Received shutdown signal, test time was about 10.000000 seconds 00:33:37.354 00:33:37.354 Latency(us) 00:33:37.354 [2024-11-20T11:47:43.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.354 [2024-11-20T11:47:43.118Z] =================================================================================================================== 00:33:37.354 [2024-11-20T11:47:43.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:37.355 12:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1157129 00:33:37.355 12:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:37.355 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:37.355 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.355 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:37.355 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.355 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:37.355 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.355 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.355 rmmod nvme_tcp 00:33:37.355 rmmod nvme_fabrics 00:33:37.355 rmmod nvme_keyring 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1156973 ']' 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1156973 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1156973 ']' 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1156973 00:33:37.614 12:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1156973 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1156973' 00:33:37.614 killing process with pid 1156973 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1156973 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1156973 00:33:37.614 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
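The teardown traced above removes only SPDK's own firewall rules: each rule was installed with an `-m comment --comment 'SPDK_NVMF:...'` tag (see `common.sh@790` earlier), so `iptables-save | grep -v SPDK_NVMF | iptables-restore` drops exactly the tagged ones. The filtering step can be shown on a small sample dump (the first rule mirrors the one the log inserted; no root is needed for the grep itself).

```shell
# The grep stage of SPDK's iptables cleanup, demonstrated on sample
# iptables-save output: only the SPDK_NVMF-tagged rule is dropped.
dump='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF:..." -j ACCEPT
-A INPUT -i lo -j ACCEPT'
kept=$(printf '%s\n' "$dump" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"   # only the untagged rule survives
```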
00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.615 12:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.150 00:33:40.150 real 0m20.412s 00:33:40.150 user 0m23.062s 00:33:40.150 sys 0m6.140s 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:40.150 ************************************ 00:33:40.150 END TEST nvmf_queue_depth 00:33:40.150 ************************************ 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:40.150 ************************************ 00:33:40.150 START 
TEST nvmf_target_multipath 00:33:40.150 ************************************ 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:40.150 * Looking for test storage... 00:33:40.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.150 12:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.150 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.151 --rc genhtml_branch_coverage=1 00:33:40.151 --rc genhtml_function_coverage=1 00:33:40.151 --rc genhtml_legend=1 00:33:40.151 --rc geninfo_all_blocks=1 00:33:40.151 --rc geninfo_unexecuted_blocks=1 00:33:40.151 00:33:40.151 ' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.151 --rc genhtml_branch_coverage=1 00:33:40.151 --rc genhtml_function_coverage=1 00:33:40.151 --rc genhtml_legend=1 00:33:40.151 --rc geninfo_all_blocks=1 00:33:40.151 --rc geninfo_unexecuted_blocks=1 00:33:40.151 00:33:40.151 ' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.151 --rc genhtml_branch_coverage=1 00:33:40.151 --rc genhtml_function_coverage=1 00:33:40.151 --rc genhtml_legend=1 00:33:40.151 --rc geninfo_all_blocks=1 00:33:40.151 --rc geninfo_unexecuted_blocks=1 00:33:40.151 00:33:40.151 ' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.151 --rc genhtml_branch_coverage=1 00:33:40.151 --rc genhtml_function_coverage=1 00:33:40.151 --rc genhtml_legend=1 00:33:40.151 --rc geninfo_all_blocks=1 00:33:40.151 --rc geninfo_unexecuted_blocks=1 00:33:40.151 00:33:40.151 ' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.151 12:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:40.151 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:40.152 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:40.152 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.152 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.152 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.152 12:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:40.152 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:40.152 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.152 12:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.723 12:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:33:46.723 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:33:46.723 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:33:46.723 Found net devices under 0000:1a:00.0: cvl_0_0 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.723 12:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:33:46.723 Found net devices under 0000:1a:00.1: cvl_0_1 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.723 12:47:51 
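The device-discovery trace above buckets NICs by PCI vendor/device ID (`e810+=(${pci_bus_cache["$intel:0x159b"]})`, the Mellanox IDs into `mlx`, and so on) before echoing `Found 0000:1a:00.0 (0x8086 - 0x159b)`. A rough sketch of that classification, using only a subset of the IDs visible in the trace; the function name is hypothetical:

```shell
# Illustrative bucketing of NICs by PCI ID, mirroring the arrays built in
# nvmf/common.sh above (IDs copied from the trace; list is not exhaustive).
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox (0x1017, 0x101d, ...)
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # → e810, matching "Found 0000:1a:00.0 (0x8086 - 0x159b)"
```

Both discovered ports (0000:1a:00.0 and 0000:1a:00.1) land in the e810 bucket, which is why `pci_devs` is set from `e810[@]` and the run reports the `cvl_0_0`/`cvl_0_1` net devices.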
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.723 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.724 12:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:33:46.724 00:33:46.724 --- 10.0.0.2 ping statistics --- 00:33:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.724 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:33:46.724 00:33:46.724 --- 10.0.0.1 ping statistics --- 00:33:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.724 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:46.724 only one NIC for nvmf test 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:46.724 12:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.724 rmmod nvme_tcp 00:33:46.724 rmmod nvme_fabrics 00:33:46.724 rmmod nvme_keyring 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:46.724 12:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.724 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.629 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:48.629 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:48.629 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:48.629 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
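The cleanup trace above (`iptables-save`, `grep -v SPDK_NVMF`, `iptables-restore`) works because every rule the test added earlier carried `-m comment --comment 'SPDK_NVMF:...'`. Teardown can then drop exactly the test's rules from the saved dump and restore the rest. A sketch of that tag-and-filter pattern, simulated on a literal ruleset rather than a live firewall:

```shell
# Hypothetical saved ruleset: one SPDK-tagged rule among ordinary rules.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p icmp -j ACCEPT'

# Same filter iptr applies between iptables-save and iptables-restore:
# keep everything that does not carry the SPDK_NVMF tag.
kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

The tag makes cleanup idempotent: re-running it never touches rules the test did not create.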
00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.630 
12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:48.630 00:33:48.630 real 0m8.546s 00:33:48.630 user 0m1.910s 00:33:48.630 sys 0m4.633s 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:48.630 ************************************ 00:33:48.630 END TEST nvmf_target_multipath 00:33:48.630 ************************************ 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:48.630 ************************************ 00:33:48.630 START TEST nvmf_zcopy 00:33:48.630 ************************************ 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:48.630 * Looking for test storage... 
00:33:48.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:48.630 12:47:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:48.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.630 --rc genhtml_branch_coverage=1 00:33:48.630 --rc genhtml_function_coverage=1 00:33:48.630 --rc genhtml_legend=1 00:33:48.630 --rc geninfo_all_blocks=1 00:33:48.630 --rc geninfo_unexecuted_blocks=1 00:33:48.630 00:33:48.630 ' 00:33:48.630 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:48.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.630 --rc genhtml_branch_coverage=1 00:33:48.630 --rc genhtml_function_coverage=1 00:33:48.630 --rc genhtml_legend=1 00:33:48.630 --rc geninfo_all_blocks=1 00:33:48.630 --rc geninfo_unexecuted_blocks=1 00:33:48.630 00:33:48.630 ' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:48.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.631 --rc genhtml_branch_coverage=1 00:33:48.631 --rc genhtml_function_coverage=1 00:33:48.631 --rc genhtml_legend=1 00:33:48.631 --rc geninfo_all_blocks=1 00:33:48.631 --rc geninfo_unexecuted_blocks=1 00:33:48.631 00:33:48.631 ' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:48.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.631 --rc genhtml_branch_coverage=1 00:33:48.631 --rc genhtml_function_coverage=1 00:33:48.631 --rc genhtml_legend=1 00:33:48.631 --rc geninfo_all_blocks=1 00:33:48.631 --rc geninfo_unexecuted_blocks=1 00:33:48.631 00:33:48.631 ' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.631 12:47:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.631 12:47:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:48.631 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:55.202 
12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:55.202 12:48:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:33:55.202 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:33:55.202 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:33:55.202 Found net devices under 0000:1a:00.0: cvl_0_0 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:33:55.202 Found net devices under 0000:1a:00.1: cvl_0_1 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:55.202 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:55.203 12:48:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:55.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:33:55.203 00:33:55.203 --- 10.0.0.2 ping statistics --- 00:33:55.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.203 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:55.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:55.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:33:55.203 00:33:55.203 --- 10.0.0.1 ping statistics --- 00:33:55.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.203 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1166376 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1166376 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1166376 ']' 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.203 12:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.203 [2024-11-20 12:48:00.509898] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:55.203 [2024-11-20 12:48:00.510794] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:33:55.203 [2024-11-20 12:48:00.510829] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.203 [2024-11-20 12:48:00.590015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.203 [2024-11-20 12:48:00.627323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.203 [2024-11-20 12:48:00.627358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.203 [2024-11-20 12:48:00.627365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.203 [2024-11-20 12:48:00.627371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.203 [2024-11-20 12:48:00.627376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.203 [2024-11-20 12:48:00.627939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.203 [2024-11-20 12:48:00.692951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:55.203 [2024-11-20 12:48:00.693165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.772 [2024-11-20 12:48:01.368618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.772 
12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.772 [2024-11-20 12:48:01.396864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.772 malloc0 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:55.772 { 00:33:55.772 "params": { 00:33:55.772 "name": "Nvme$subsystem", 00:33:55.772 "trtype": "$TEST_TRANSPORT", 00:33:55.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.772 "adrfam": "ipv4", 00:33:55.772 "trsvcid": "$NVMF_PORT", 00:33:55.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.772 "hdgst": ${hdgst:-false}, 00:33:55.772 "ddgst": ${ddgst:-false} 00:33:55.772 }, 00:33:55.772 "method": "bdev_nvme_attach_controller" 00:33:55.772 } 00:33:55.772 EOF 00:33:55.772 )") 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:55.772 12:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:55.772 12:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:55.772 "params": { 00:33:55.772 "name": "Nvme1", 00:33:55.772 "trtype": "tcp", 00:33:55.772 "traddr": "10.0.0.2", 00:33:55.772 "adrfam": "ipv4", 00:33:55.772 "trsvcid": "4420", 00:33:55.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.772 "hdgst": false, 00:33:55.772 "ddgst": false 00:33:55.772 }, 00:33:55.772 "method": "bdev_nvme_attach_controller" 00:33:55.772 }' 00:33:55.772 [2024-11-20 12:48:01.490735] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:33:55.772 [2024-11-20 12:48:01.490775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166489 ] 00:33:56.031 [2024-11-20 12:48:01.565127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.031 [2024-11-20 12:48:01.603224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.031 Running I/O for 10 seconds... 
00:33:58.343 9311.00 IOPS, 72.74 MiB/s [2024-11-20T11:48:05.043Z] 9408.50 IOPS, 73.50 MiB/s [2024-11-20T11:48:05.978Z] 9457.33 IOPS, 73.89 MiB/s [2024-11-20T11:48:06.914Z] 9472.00 IOPS, 74.00 MiB/s [2024-11-20T11:48:07.850Z] 9455.00 IOPS, 73.87 MiB/s [2024-11-20T11:48:08.785Z] 9460.67 IOPS, 73.91 MiB/s [2024-11-20T11:48:10.172Z] 9457.71 IOPS, 73.89 MiB/s [2024-11-20T11:48:11.107Z] 9459.88 IOPS, 73.91 MiB/s [2024-11-20T11:48:12.042Z] 9458.89 IOPS, 73.90 MiB/s [2024-11-20T11:48:12.042Z] 9465.30 IOPS, 73.95 MiB/s 00:34:06.278 Latency(us) 00:34:06.278 [2024-11-20T11:48:12.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.278 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:06.278 Verification LBA range: start 0x0 length 0x1000 00:34:06.278 Nvme1n1 : 10.01 9467.03 73.96 0.00 0.00 13481.93 1906.50 19422.49 00:34:06.278 [2024-11-20T11:48:12.042Z] =================================================================================================================== 00:34:06.278 [2024-11-20T11:48:12.042Z] Total : 9467.03 73.96 0.00 0.00 13481.93 1906.50 19422.49 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1168734 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:06.279 12:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.279 { 00:34:06.279 "params": { 00:34:06.279 "name": "Nvme$subsystem", 00:34:06.279 "trtype": "$TEST_TRANSPORT", 00:34:06.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.279 "adrfam": "ipv4", 00:34:06.279 "trsvcid": "$NVMF_PORT", 00:34:06.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.279 "hdgst": ${hdgst:-false}, 00:34:06.279 "ddgst": ${ddgst:-false} 00:34:06.279 }, 00:34:06.279 "method": "bdev_nvme_attach_controller" 00:34:06.279 } 00:34:06.279 EOF 00:34:06.279 )") 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:06.279 [2024-11-20 12:48:11.948273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:11.948307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:06.279 12:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:06.279 "params": { 00:34:06.279 "name": "Nvme1", 00:34:06.279 "trtype": "tcp", 00:34:06.279 "traddr": "10.0.0.2", 00:34:06.279 "adrfam": "ipv4", 00:34:06.279 "trsvcid": "4420", 00:34:06.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.279 "hdgst": false, 00:34:06.279 "ddgst": false 00:34:06.279 }, 00:34:06.279 "method": "bdev_nvme_attach_controller" 00:34:06.279 }' 00:34:06.279 [2024-11-20 12:48:11.960232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:11.960242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.279 [2024-11-20 12:48:11.972225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:11.972234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.279 [2024-11-20 12:48:11.984226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:11.984235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.279 [2024-11-20 12:48:11.984702] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:34:06.279 [2024-11-20 12:48:11.984741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168734 ] 00:34:06.279 [2024-11-20 12:48:11.996223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:11.996232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.279 [2024-11-20 12:48:12.008224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:12.008233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.279 [2024-11-20 12:48:12.020230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:12.020240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.279 [2024-11-20 12:48:12.032226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.279 [2024-11-20 12:48:12.032234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.044225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.044233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.056230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.056240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.058345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.538 [2024-11-20 12:48:12.068227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:06.538 [2024-11-20 12:48:12.068238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.080226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.080240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.092228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.092236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.097299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.538 [2024-11-20 12:48:12.104227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.104237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.116243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.116260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.128231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.128245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.140229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.140240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.152230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.152240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.164226] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.164235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.176242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.176251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.188242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.188263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.200235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.200249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.212240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.212257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.224234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.224249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.236226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.236236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.248222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.248232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.260235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.260251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.272229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.272241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.284226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.284235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.538 [2024-11-20 12:48:12.296226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.538 [2024-11-20 12:48:12.296235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.308231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.308245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.320230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.320240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.332226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.332235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.344224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.344233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.356234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 
[2024-11-20 12:48:12.356249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.398689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.398705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.408228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.408238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 Running I/O for 5 seconds... 00:34:06.796 [2024-11-20 12:48:12.423401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.423426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.437043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.437062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.451639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.451657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.465111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.465130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.476644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 12:48:12.476662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.796 [2024-11-20 12:48:12.489262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.796 [2024-11-20 
12:48:12.489280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.797 [2024-11-20 12:48:12.503728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.797 [2024-11-20 12:48:12.503747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.797 [2024-11-20 12:48:12.516782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.797 [2024-11-20 12:48:12.516799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.797 [2024-11-20 12:48:12.531194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.797 [2024-11-20 12:48:12.531212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.797 [2024-11-20 12:48:12.544192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.797 [2024-11-20 12:48:12.544210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:06.797 [2024-11-20 12:48:12.557228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:06.797 [2024-11-20 12:48:12.557247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.055 [2024-11-20 12:48:12.571158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.056 [2024-11-20 12:48:12.571177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.056 [2024-11-20 12:48:12.584314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.056 [2024-11-20 12:48:12.584332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.056 [2024-11-20 12:48:12.597003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.056 [2024-11-20 12:48:12.597021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:34:07.056 [2024-11-20 12:48:12.611604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.056 [2024-11-20 12:48:12.611623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.056
[... the same pair of messages — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 13 ms intervals from [2024-11-20 12:48:12.624822] through [2024-11-20 12:48:14.860869]; only the timestamps differ. Throughput samples logged within this interval: ...]
18827.00 IOPS, 147.09 MiB/s [2024-11-20T11:48:13.599Z]
18821.50 IOPS, 147.04 MiB/s [2024-11-20T11:48:14.636Z]
[2024-11-20 12:48:14.873837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
[2024-11-20 12:48:14.873854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.132 [2024-11-20 12:48:14.887776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.132 [2024-11-20 12:48:14.887793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.900768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.900784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.915629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.915646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.928880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.928896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.943332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.943349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.956641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.956657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.968749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.968766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.981888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.981905] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:14.996201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:14.996218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.009292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.009309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.024003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.024020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.037275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.037293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.051485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.051503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.064662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.064678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.076736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.076757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.088600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.088617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:09.390 [2024-11-20 12:48:15.101595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.101611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.115122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.115139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.128295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.128312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.390 [2024-11-20 12:48:15.141230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.390 [2024-11-20 12:48:15.141246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.153005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.153022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.165870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.165888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.179830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.179847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.192981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.192998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.207739] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.207756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.221131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.221148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.235431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.235449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.248765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.248784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.260930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.260947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.275715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.275733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.289420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.289438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.303643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.303662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.316782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:09.648 [2024-11-20 12:48:15.316800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.648 [2024-11-20 12:48:15.331778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.649 [2024-11-20 12:48:15.331796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.649 [2024-11-20 12:48:15.344977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.649 [2024-11-20 12:48:15.344995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.649 [2024-11-20 12:48:15.359375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.649 [2024-11-20 12:48:15.359393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.649 [2024-11-20 12:48:15.372344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.649 [2024-11-20 12:48:15.372362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.649 [2024-11-20 12:48:15.385449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.649 [2024-11-20 12:48:15.385466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.649 [2024-11-20 12:48:15.399401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.649 [2024-11-20 12:48:15.399426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.412919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.412937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 18829.67 IOPS, 147.11 MiB/s [2024-11-20T11:48:15.671Z] [2024-11-20 12:48:15.427574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.427593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.440884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.440901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.453533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.453551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.467839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.467856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.481391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.481408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.492684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.492701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.504973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.504990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.519324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.519342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.532679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 
[2024-11-20 12:48:15.532696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.544628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.544646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.557694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.557712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.568467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.568485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.581603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.581621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.595806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.595824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.608960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.608978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.623802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.623820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.637518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.637536] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.651597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.651614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.907 [2024-11-20 12:48:15.664561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.907 [2024-11-20 12:48:15.664577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.676723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.676739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.689362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.689380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.703556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.703573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.716632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.716649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.728984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.729001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.743795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.743812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:10.166 [2024-11-20 12:48:15.757190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.757208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.768704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.768721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.781089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.781106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.793795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.793811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.804961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.804978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.817590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.817608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.828607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.828624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.841054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.841070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.853504] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.853520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.868093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.868109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.881158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.881175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.893478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.893494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.907154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.907171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.166 [2024-11-20 12:48:15.920322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.166 [2024-11-20 12:48:15.920339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:15.932885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:15.932902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:15.945274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:15.945290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:15.956823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:15.956839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:15.969390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:15.969406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:15.983753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:15.983770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:15.996906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:15.996924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.009129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.009146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.021121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.021138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.033284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.033300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.045891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.045909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.059892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 
[2024-11-20 12:48:16.059914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.073120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.073137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.085275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.085292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.099495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.099512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.112285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.112302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.125381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.125399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.139522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.139539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.152647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.152663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.167582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.167600] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.425 [2024-11-20 12:48:16.180511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.425 [2024-11-20 12:48:16.180528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.192592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.192608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.205615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.205631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.216428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.216445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.229293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.229309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.243765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.243782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.257207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.257224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.271452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.271469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:10.684 [2024-11-20 12:48:16.284622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.284638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.299949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.299966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.313030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.313055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.324580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.324596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.337052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.337069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.349467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.349484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.363807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.363824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.377455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.377473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.391659] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.391676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.404923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.404939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 [2024-11-20 12:48:16.419739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.419757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.684 18827.75 IOPS, 147.09 MiB/s [2024-11-20T11:48:16.448Z] [2024-11-20 12:48:16.432877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.684 [2024-11-20 12:48:16.432893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.447286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.447304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.460381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.460399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.473247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.473263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.487649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.487666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.501003] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.501021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.513145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.513162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.525991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.526008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.539834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.539851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.552972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.552990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.567620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.567641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.580869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.580885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.593483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.593501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.604600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.604617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.617099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.617116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.629355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.629372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.643614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.643631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.657048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.657065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.668505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.668521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.681216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.681233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.943 [2024-11-20 12:48:16.696043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.943 [2024-11-20 12:48:16.696060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.708988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 
[2024-11-20 12:48:16.709005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.721421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.721439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.732714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.732730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.745505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.745522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.759435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.759453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.772579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.772596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.785715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.785733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.799657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.799674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.812650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.812668] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.827266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.201 [2024-11-20 12:48:16.827284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.201 [2024-11-20 12:48:16.840705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.840723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.853472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.853489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.867282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.867299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.880250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.880268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.893192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.893209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.907282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.907300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.920627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.920644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:11.202 [2024-11-20 12:48:16.932646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.932663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.945815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.945832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.202 [2024-11-20 12:48:16.960071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.202 [2024-11-20 12:48:16.960089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:16.972998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:16.973015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:16.985535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:16.985552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:16.999814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:16.999832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:17.013104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:17.013122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:17.027930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:17.027948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:17.041235] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:17.041252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:17.056242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:17.056259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.460 [2024-11-20 12:48:17.069606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.460 [2024-11-20 12:48:17.069624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.083960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.083978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.097268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.097285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.109587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.109604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.123755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.123773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.137087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.137104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.151353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.151370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.164758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.164774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.179272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.179289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.192441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.192458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.205652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.205670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.461 [2024-11-20 12:48:17.219624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.461 [2024-11-20 12:48:17.219642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.232175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.232193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.245161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.245178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.259759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 
[2024-11-20 12:48:17.259776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.272909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.272926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.284206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.284223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.297643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.297660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.311825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.311843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.325139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.325156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.339906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.339923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.353242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.353259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.368056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.368073] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.381402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.381424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.395598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.395616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.409153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.409170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.423953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.423970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 18829.60 IOPS, 147.11 MiB/s 00:34:11.720 Latency(us) 00:34:11.720 [2024-11-20T11:48:17.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.720 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:11.720 Nvme1n1 : 5.01 18832.14 147.13 0.00 0.00 6791.50 1854.37 12213.53 00:34:11.720 [2024-11-20T11:48:17.484Z] =================================================================================================================== 00:34:11.720 [2024-11-20T11:48:17.484Z] Total : 18832.14 147.13 0.00 0.00 6791.50 1854.37 12213.53 00:34:11.720 [2024-11-20 12:48:17.432234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.432249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.444234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.444247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.456242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.456257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.468238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.468253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.720 [2024-11-20 12:48:17.480233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.720 [2024-11-20 12:48:17.480245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.979 [2024-11-20 12:48:17.492229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.979 [2024-11-20 12:48:17.492242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.979 [2024-11-20 12:48:17.504229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.979 [2024-11-20 12:48:17.504242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.979 [2024-11-20 12:48:17.516224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.979 [2024-11-20 12:48:17.516241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.979 [2024-11-20 12:48:17.528228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.979 [2024-11-20 12:48:17.528241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.980 [2024-11-20 12:48:17.540224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.980 
[2024-11-20 12:48:17.540233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.980 [2024-11-20 12:48:17.552226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.980 [2024-11-20 12:48:17.552235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.980 [2024-11-20 12:48:17.564227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.980 [2024-11-20 12:48:17.564237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.980 [2024-11-20 12:48:17.576225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:11.980 [2024-11-20 12:48:17.576234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:11.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1168734) - No such process 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1168734 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- common/autotest_common.sh@10 -- # set +x 00:34:11.980 delay0 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.980 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:11.980 [2024-11-20 12:48:17.720333] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:18.681 Initializing NVMe Controllers 00:34:18.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:18.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:18.681 Initialization complete. Launching workers. 
00:34:18.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4518 00:34:18.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4792, failed to submit 46 00:34:18.681 success 4660, unsuccessful 132, failed 0 00:34:18.681 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:18.681 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:18.681 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:18.681 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:18.682 rmmod nvme_tcp 00:34:18.682 rmmod nvme_fabrics 00:34:18.682 rmmod nvme_keyring 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1166376 ']' 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1166376 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 1166376 ']' 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1166376 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:18.682 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1166376 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1166376' 00:34:18.940 killing process with pid 1166376 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1166376 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1166376 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:18.940 12:48:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.940 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:21.468 00:34:21.468 real 0m32.589s 00:34:21.468 user 0m42.542s 00:34:21.468 sys 0m11.577s 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.468 ************************************ 00:34:21.468 END TEST nvmf_zcopy 00:34:21.468 ************************************ 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:21.468 
************************************ 00:34:21.468 START TEST nvmf_nmic 00:34:21.468 ************************************ 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:21.468 * Looking for test storage... 00:34:21.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.468 12:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:21.468 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.469 12:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:21.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.469 --rc genhtml_branch_coverage=1 00:34:21.469 --rc genhtml_function_coverage=1 00:34:21.469 --rc genhtml_legend=1 00:34:21.469 --rc geninfo_all_blocks=1 00:34:21.469 --rc geninfo_unexecuted_blocks=1 00:34:21.469 00:34:21.469 ' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:21.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.469 --rc genhtml_branch_coverage=1 00:34:21.469 --rc genhtml_function_coverage=1 00:34:21.469 --rc genhtml_legend=1 00:34:21.469 --rc geninfo_all_blocks=1 00:34:21.469 --rc geninfo_unexecuted_blocks=1 00:34:21.469 00:34:21.469 ' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:21.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.469 --rc genhtml_branch_coverage=1 00:34:21.469 --rc genhtml_function_coverage=1 00:34:21.469 --rc genhtml_legend=1 00:34:21.469 --rc geninfo_all_blocks=1 00:34:21.469 --rc geninfo_unexecuted_blocks=1 00:34:21.469 00:34:21.469 ' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:21.469 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.469 --rc genhtml_branch_coverage=1 00:34:21.469 --rc genhtml_function_coverage=1 00:34:21.469 --rc genhtml_legend=1 00:34:21.469 --rc geninfo_all_blocks=1 00:34:21.469 --rc geninfo_unexecuted_blocks=1 00:34:21.469 00:34:21.469 ' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:34:21.469 12:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.469 12:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:21.469 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:21.469 12:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.041 12:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.041 12:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:34:28.041 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:34:28.041 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.041 12:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.041 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:34:28.042 Found net devices under 0000:1a:00.0: cvl_0_0 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.042 12:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:34:28.042 Found net devices under 0000:1a:00.1: cvl_0_1 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.042 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.042 12:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:34:28.042 00:34:28.042 --- 10.0.0.2 ping statistics --- 00:34:28.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.042 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:34:28.042 00:34:28.042 --- 10.0.0.1 ping statistics --- 00:34:28.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.042 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1174621 
00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1174621 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1174621 ']' 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.042 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.042 [2024-11-20 12:48:33.166510] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.042 [2024-11-20 12:48:33.167360] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:34:28.042 [2024-11-20 12:48:33.167393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.042 [2024-11-20 12:48:33.247546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:28.042 [2024-11-20 12:48:33.287730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.042 [2024-11-20 12:48:33.287780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.042 [2024-11-20 12:48:33.287786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.042 [2024-11-20 12:48:33.287792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.042 [2024-11-20 12:48:33.287796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.042 [2024-11-20 12:48:33.289254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.042 [2024-11-20 12:48:33.289274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:28.042 [2024-11-20 12:48:33.289366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.042 [2024-11-20 12:48:33.289367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:28.042 [2024-11-20 12:48:33.355851] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.042 [2024-11-20 12:48:33.356875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:28.042 [2024-11-20 12:48:33.356894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:28.042 [2024-11-20 12:48:33.357218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:28.042 [2024-11-20 12:48:33.357273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:28.302 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.302 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:28.302 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:28.302 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.302 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.302 [2024-11-20 12:48:34.022200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.302 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 Malloc0 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 [2024-11-20 12:48:34.106447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.562 12:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:28.562 test case1: single bdev can't be used in multiple subsystems 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 [2024-11-20 12:48:34.137880] 
bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:28.562 [2024-11-20 12:48:34.137899] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:28.562 [2024-11-20 12:48:34.137906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.562 request: 00:34:28.562 { 00:34:28.562 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:28.562 "namespace": { 00:34:28.562 "bdev_name": "Malloc0", 00:34:28.562 "no_auto_visible": false 00:34:28.562 }, 00:34:28.562 "method": "nvmf_subsystem_add_ns", 00:34:28.562 "req_id": 1 00:34:28.562 } 00:34:28.562 Got JSON-RPC error response 00:34:28.562 response: 00:34:28.562 { 00:34:28.562 "code": -32602, 00:34:28.562 "message": "Invalid parameters" 00:34:28.562 } 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:28.562 Adding namespace failed - expected result. 
00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:28.562 test case2: host connect to nvmf target in multiple paths 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:28.562 [2024-11-20 12:48:34.149973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.562 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:28.821 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:29.080 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:29.080 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:29.080 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:29.080 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:29.080 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:30.984 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:30.984 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:30.984 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:30.984 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:30.984 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:30.984 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:30.984 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:30.984 [global] 00:34:30.984 thread=1 00:34:30.984 invalidate=1 00:34:30.984 rw=write 00:34:30.984 time_based=1 00:34:30.984 runtime=1 00:34:30.984 ioengine=libaio 00:34:30.984 direct=1 00:34:30.984 bs=4096 00:34:30.984 iodepth=1 00:34:30.984 norandommap=0 00:34:30.984 numjobs=1 00:34:30.984 00:34:30.985 verify_dump=1 00:34:30.985 verify_backlog=512 00:34:30.985 verify_state_save=0 00:34:30.985 do_verify=1 00:34:30.985 verify=crc32c-intel 00:34:30.985 [job0] 00:34:30.985 filename=/dev/nvme0n1 00:34:30.985 Could not set queue depth (nvme0n1) 00:34:31.243 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:31.243 fio-3.35 00:34:31.243 Starting 1 thread 00:34:32.620 00:34:32.620 job0: (groupid=0, jobs=1): err= 0: pid=1175328: Wed Nov 20 
12:48:38 2024 00:34:32.620 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:34:32.620 slat (nsec): min=6260, max=26531, avg=7101.05, stdev=717.77 00:34:32.620 clat (usec): min=157, max=258, avg=170.88, stdev= 8.74 00:34:32.620 lat (usec): min=165, max=265, avg=177.98, stdev= 8.77 00:34:32.620 clat percentiles (usec): 00:34:32.620 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 165], 20.00th=[ 167], 00:34:32.620 | 30.00th=[ 169], 40.00th=[ 169], 50.00th=[ 169], 60.00th=[ 172], 00:34:32.620 | 70.00th=[ 172], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 178], 00:34:32.620 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 251], 99.95th=[ 253], 00:34:32.620 | 99.99th=[ 260] 00:34:32.620 write: IOPS=3318, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:34:32.620 slat (nsec): min=8173, max=36578, avg=10186.07, stdev=1210.85 00:34:32.620 clat (usec): min=109, max=366, avg=122.34, stdev= 5.37 00:34:32.620 lat (usec): min=121, max=403, avg=132.53, stdev= 5.86 00:34:32.620 clat percentiles (usec): 00:34:32.620 | 1.00th=[ 117], 5.00th=[ 118], 10.00th=[ 119], 20.00th=[ 120], 00:34:32.620 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 123], 60.00th=[ 123], 00:34:32.620 | 70.00th=[ 124], 80.00th=[ 125], 90.00th=[ 127], 95.00th=[ 129], 00:34:32.620 | 99.00th=[ 133], 99.50th=[ 135], 99.90th=[ 141], 99.95th=[ 145], 00:34:32.620 | 99.99th=[ 367] 00:34:32.620 bw ( KiB/s): min=13368, max=13368, per=100.00%, avg=13368.00, stdev= 0.00, samples=1 00:34:32.620 iops : min= 3342, max= 3342, avg=3342.00, stdev= 0.00, samples=1 00:34:32.620 lat (usec) : 250=99.83%, 500=0.17% 00:34:32.620 cpu : usr=3.70%, sys=5.10%, ctx=6394, majf=0, minf=1 00:34:32.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:32.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.620 issued rwts: total=3072,3322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.620 
latency : target=0, window=0, percentile=100.00%, depth=1 00:34:32.620 00:34:32.620 Run status group 0 (all jobs): 00:34:32.620 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:34:32.620 WRITE: bw=13.0MiB/s (13.6MB/s), 13.0MiB/s-13.0MiB/s (13.6MB/s-13.6MB/s), io=13.0MiB (13.6MB), run=1001-1001msec 00:34:32.620 00:34:32.620 Disk stats (read/write): 00:34:32.620 nvme0n1: ios=2745/3072, merge=0/0, ticks=453/360, in_queue=813, util=91.28% 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:32.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.620 rmmod nvme_tcp 00:34:32.620 rmmod nvme_fabrics 00:34:32.620 rmmod nvme_keyring 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1174621 ']' 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1174621 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1174621 ']' 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1174621 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:32.620 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1174621 00:34:32.879 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:32.879 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:32.879 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1174621' 00:34:32.879 killing process with pid 1174621 00:34:32.879 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1174621 00:34:32.879 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1174621 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:34:32.880 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.416 00:34:35.416 real 0m13.889s 00:34:35.416 user 0m27.124s 00:34:35.416 sys 0m6.296s 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:35.416 ************************************ 00:34:35.416 END TEST nvmf_nmic 00:34:35.416 ************************************ 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:35.416 ************************************ 00:34:35.416 START TEST nvmf_fio_target 00:34:35.416 ************************************ 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:35.416 * Looking for test storage... 
00:34:35.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:35.416 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.417 
12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.417 --rc genhtml_branch_coverage=1 00:34:35.417 --rc genhtml_function_coverage=1 00:34:35.417 --rc genhtml_legend=1 00:34:35.417 --rc geninfo_all_blocks=1 00:34:35.417 --rc geninfo_unexecuted_blocks=1 00:34:35.417 00:34:35.417 ' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.417 --rc genhtml_branch_coverage=1 00:34:35.417 --rc genhtml_function_coverage=1 00:34:35.417 --rc genhtml_legend=1 00:34:35.417 --rc geninfo_all_blocks=1 00:34:35.417 --rc geninfo_unexecuted_blocks=1 00:34:35.417 00:34:35.417 ' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.417 --rc genhtml_branch_coverage=1 00:34:35.417 --rc genhtml_function_coverage=1 00:34:35.417 --rc genhtml_legend=1 00:34:35.417 --rc geninfo_all_blocks=1 00:34:35.417 --rc geninfo_unexecuted_blocks=1 00:34:35.417 00:34:35.417 ' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.417 --rc genhtml_branch_coverage=1 00:34:35.417 --rc genhtml_function_coverage=1 00:34:35.417 --rc genhtml_legend=1 00:34:35.417 --rc geninfo_all_blocks=1 
00:34:35.417 --rc geninfo_unexecuted_blocks=1 00:34:35.417 00:34:35.417 ' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:34:35.417 
12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.417 12:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.417 
12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.417 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.417 12:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:41.988 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:41.989 12:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:34:41.989 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:34:41.989 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.989 
12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:34:41.989 Found net 
devices under 0000:1a:00.0: cvl_0_0 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:34:41.989 Found net devices under 0000:1a:00.1: cvl_0_1 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:41.989 12:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.989 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.990 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:41.990 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:41.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:34:41.990 00:34:41.990 --- 10.0.0.2 ping statistics --- 00:34:41.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.990 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:34:41.990 00:34:41.990 --- 10.0.0.1 ping statistics --- 00:34:41.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.990 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.990 12:48:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1179348 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1179348 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1179348 ']' 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.990 [2024-11-20 12:48:47.157583] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:41.990 [2024-11-20 12:48:47.158422] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:34:41.990 [2024-11-20 12:48:47.158452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.990 [2024-11-20 12:48:47.234966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:41.990 [2024-11-20 12:48:47.274149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.990 [2024-11-20 12:48:47.274183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.990 [2024-11-20 12:48:47.274190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.990 [2024-11-20 12:48:47.274196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.990 [2024-11-20 12:48:47.274200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.990 [2024-11-20 12:48:47.275651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.990 [2024-11-20 12:48:47.275765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.990 [2024-11-20 12:48:47.275876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.990 [2024-11-20 12:48:47.275878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:41.990 [2024-11-20 12:48:47.341097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:41.990 [2024-11-20 12:48:47.342024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:41.990 [2024-11-20 12:48:47.342462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:41.990 [2024-11-20 12:48:47.342771] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:41.990 [2024-11-20 12:48:47.342800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:41.990 [2024-11-20 12:48:47.564535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.990 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:42.250 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:42.250 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:42.509 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:42.509 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:42.509 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:42.509 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:42.768 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:42.768 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:43.027 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:43.286 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:43.286 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:43.286 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:43.286 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:43.545 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:43.545 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:43.803 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:43.803 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:43.804 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.063 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:44.063 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:44.322 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.322 [2024-11-20 12:48:50.064456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.580 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:44.580 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:44.840 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:45.100 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:45.100 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:45.100 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:45.100 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:45.100 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:45.100 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:47.003 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:47.261 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:47.262 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:47.262 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:47.262 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:47.262 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:47.262 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:47.262 [global] 00:34:47.262 thread=1 00:34:47.262 invalidate=1 00:34:47.262 rw=write 00:34:47.262 time_based=1 00:34:47.262 runtime=1 00:34:47.262 ioengine=libaio 00:34:47.262 direct=1 00:34:47.262 bs=4096 00:34:47.262 iodepth=1 00:34:47.262 norandommap=0 00:34:47.262 numjobs=1 00:34:47.262 00:34:47.262 verify_dump=1 00:34:47.262 verify_backlog=512 00:34:47.262 verify_state_save=0 00:34:47.262 do_verify=1 00:34:47.262 verify=crc32c-intel 00:34:47.262 [job0] 00:34:47.262 filename=/dev/nvme0n1 00:34:47.262 [job1] 00:34:47.262 filename=/dev/nvme0n2 00:34:47.262 [job2] 00:34:47.262 filename=/dev/nvme0n3 00:34:47.262 [job3] 00:34:47.262 filename=/dev/nvme0n4 00:34:47.262 Could not set queue depth (nvme0n1) 00:34:47.262 Could not set queue depth (nvme0n2) 00:34:47.262 Could not set queue depth (nvme0n3) 00:34:47.262 Could not set queue depth (nvme0n4) 00:34:47.520 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.520 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.520 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.520 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.520 fio-3.35 00:34:47.520 Starting 4 threads 00:34:48.910 00:34:48.910 job0: (groupid=0, jobs=1): err= 0: pid=1180608: Wed Nov 20 12:48:54 2024 00:34:48.910 read: IOPS=2013, BW=8055KiB/s (8248kB/s)(8200KiB/1018msec) 00:34:48.910 slat (nsec): min=7262, max=36533, avg=8103.78, stdev=1121.60 00:34:48.910 clat (usec): min=177, max=40986, avg=275.62, stdev=1265.78 00:34:48.910 lat (usec): min=184, 
max=40995, avg=283.72, stdev=1265.83 00:34:48.910 clat percentiles (usec): 00:34:48.910 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 210], 00:34:48.910 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:34:48.910 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:34:48.910 | 99.00th=[ 306], 99.50th=[ 359], 99.90th=[ 502], 99.95th=[40633], 00:34:48.910 | 99.99th=[41157] 00:34:48.910 write: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec); 0 zone resets 00:34:48.910 slat (nsec): min=9516, max=49366, avg=11318.36, stdev=1974.48 00:34:48.910 clat (usec): min=114, max=1463, avg=153.87, stdev=37.08 00:34:48.910 lat (usec): min=127, max=1475, avg=165.19, stdev=37.38 00:34:48.910 clat percentiles (usec): 00:34:48.910 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:34:48.910 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 155], 00:34:48.910 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:34:48.910 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 371], 99.95th=[ 412], 00:34:48.910 | 99.99th=[ 1467] 00:34:48.910 bw ( KiB/s): min= 9088, max=11392, per=64.81%, avg=10240.00, stdev=1629.17, samples=2 00:34:48.910 iops : min= 2272, max= 2848, avg=2560.00, stdev=407.29, samples=2 00:34:48.910 lat (usec) : 250=91.48%, 500=8.42%, 750=0.04% 00:34:48.910 lat (msec) : 2=0.02%, 50=0.04% 00:34:48.910 cpu : usr=4.42%, sys=6.48%, ctx=4610, majf=0, minf=2 00:34:48.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.910 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.910 job1: (groupid=0, jobs=1): err= 0: pid=1180611: Wed Nov 20 12:48:54 2024 00:34:48.910 read: IOPS=23, BW=92.6KiB/s 
(94.8kB/s)(96.0KiB/1037msec) 00:34:48.910 slat (nsec): min=8798, max=24309, avg=21696.42, stdev=4040.93 00:34:48.910 clat (usec): min=230, max=41107, avg=39274.88, stdev=8316.58 00:34:48.910 lat (usec): min=240, max=41115, avg=39296.58, stdev=8319.07 00:34:48.910 clat percentiles (usec): 00:34:48.910 | 1.00th=[ 231], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:48.910 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:48.910 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:48.910 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:48.910 | 99.99th=[41157] 00:34:48.910 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:34:48.910 slat (nsec): min=10439, max=49020, avg=12083.31, stdev=2755.10 00:34:48.910 clat (usec): min=144, max=307, avg=168.37, stdev=12.15 00:34:48.910 lat (usec): min=155, max=348, avg=180.45, stdev=13.36 00:34:48.910 clat percentiles (usec): 00:34:48.910 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:34:48.910 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:34:48.910 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 188], 00:34:48.910 | 99.00th=[ 202], 99.50th=[ 227], 99.90th=[ 310], 99.95th=[ 310], 00:34:48.910 | 99.99th=[ 310] 00:34:48.910 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.910 lat (usec) : 250=95.52%, 500=0.19% 00:34:48.910 lat (msec) : 50=4.29% 00:34:48.910 cpu : usr=0.58%, sys=0.68%, ctx=536, majf=0, minf=2 00:34:48.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.910 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.910 
latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.910 job2: (groupid=0, jobs=1): err= 0: pid=1180613: Wed Nov 20 12:48:54 2024 00:34:48.910 read: IOPS=22, BW=90.6KiB/s (92.7kB/s)(92.0KiB/1016msec) 00:34:48.910 slat (nsec): min=9544, max=25943, avg=22977.87, stdev=4222.91 00:34:48.910 clat (usec): min=467, max=41434, avg=39226.43, stdev=8449.81 00:34:48.910 lat (usec): min=493, max=41444, avg=39249.41, stdev=8449.14 00:34:48.910 clat percentiles (usec): 00:34:48.910 | 1.00th=[ 469], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:48.910 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:48.910 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:48.910 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:48.910 | 99.99th=[41681] 00:34:48.910 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:34:48.910 slat (nsec): min=11251, max=48141, avg=12729.91, stdev=2301.08 00:34:48.910 clat (usec): min=141, max=313, avg=203.86, stdev=34.00 00:34:48.910 lat (usec): min=153, max=351, avg=216.59, stdev=34.31 00:34:48.910 clat percentiles (usec): 00:34:48.910 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 174], 00:34:48.910 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 212], 00:34:48.910 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 258], 00:34:48.910 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 314], 00:34:48.910 | 99.99th=[ 314] 00:34:48.910 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.910 lat (usec) : 250=88.04%, 500=7.85% 00:34:48.910 lat (msec) : 50=4.11% 00:34:48.910 cpu : usr=0.69%, sys=0.69%, ctx=536, majf=0, minf=1 00:34:48.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:48.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.910 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.910 job3: (groupid=0, jobs=1): err= 0: pid=1180614: Wed Nov 20 12:48:54 2024 00:34:48.910 read: IOPS=151, BW=607KiB/s (621kB/s)(612KiB/1009msec) 00:34:48.910 slat (nsec): min=7272, max=23862, avg=10088.23, stdev=5319.66 00:34:48.910 clat (usec): min=185, max=41001, avg=5835.18, stdev=14051.39 00:34:48.910 lat (usec): min=193, max=41025, avg=5845.27, stdev=14056.45 00:34:48.910 clat percentiles (usec): 00:34:48.910 | 1.00th=[ 194], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:34:48.910 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:34:48.910 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[41157], 95.00th=[41157], 00:34:48.910 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:48.910 | 99.99th=[41157] 00:34:48.910 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:34:48.910 slat (nsec): min=10121, max=36958, avg=11303.83, stdev=1983.08 00:34:48.910 clat (usec): min=125, max=447, avg=207.92, stdev=35.56 00:34:48.911 lat (usec): min=136, max=484, avg=219.23, stdev=36.13 00:34:48.911 clat percentiles (usec): 00:34:48.911 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 161], 20.00th=[ 178], 00:34:48.911 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 225], 00:34:48.911 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:34:48.911 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 449], 99.95th=[ 449], 00:34:48.911 | 99.99th=[ 449] 00:34:48.911 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.911 lat (usec) : 250=87.67%, 500=9.17% 00:34:48.911 lat (msec) : 50=3.16% 00:34:48.911 cpu : usr=0.20%, sys=0.79%, ctx=666, 
majf=0, minf=1 00:34:48.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.911 issued rwts: total=153,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.911 00:34:48.911 Run status group 0 (all jobs): 00:34:48.911 READ: bw=8679KiB/s (8887kB/s), 90.6KiB/s-8055KiB/s (92.7kB/s-8248kB/s), io=9000KiB (9216kB), run=1009-1037msec 00:34:48.911 WRITE: bw=15.4MiB/s (16.2MB/s), 1975KiB/s-9.82MiB/s (2022kB/s-10.3MB/s), io=16.0MiB (16.8MB), run=1009-1037msec 00:34:48.911 00:34:48.911 Disk stats (read/write): 00:34:48.911 nvme0n1: ios=2097/2048, merge=0/0, ticks=472/293, in_queue=765, util=86.77% 00:34:48.911 nvme0n2: ios=69/512, merge=0/0, ticks=803/73, in_queue=876, util=90.96% 00:34:48.911 nvme0n3: ios=41/512, merge=0/0, ticks=1641/98, in_queue=1739, util=93.65% 00:34:48.911 nvme0n4: ios=43/512, merge=0/0, ticks=1641/108, in_queue=1749, util=94.33% 00:34:48.911 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:48.911 [global] 00:34:48.911 thread=1 00:34:48.911 invalidate=1 00:34:48.911 rw=randwrite 00:34:48.911 time_based=1 00:34:48.911 runtime=1 00:34:48.911 ioengine=libaio 00:34:48.911 direct=1 00:34:48.911 bs=4096 00:34:48.911 iodepth=1 00:34:48.911 norandommap=0 00:34:48.911 numjobs=1 00:34:48.911 00:34:48.911 verify_dump=1 00:34:48.911 verify_backlog=512 00:34:48.911 verify_state_save=0 00:34:48.911 do_verify=1 00:34:48.911 verify=crc32c-intel 00:34:48.911 [job0] 00:34:48.911 filename=/dev/nvme0n1 00:34:48.911 [job1] 00:34:48.911 filename=/dev/nvme0n2 00:34:48.911 [job2] 00:34:48.911 filename=/dev/nvme0n3 00:34:48.911 [job3] 00:34:48.911 
filename=/dev/nvme0n4 00:34:48.911 Could not set queue depth (nvme0n1) 00:34:48.911 Could not set queue depth (nvme0n2) 00:34:48.911 Could not set queue depth (nvme0n3) 00:34:48.911 Could not set queue depth (nvme0n4) 00:34:49.171 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.171 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.171 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.171 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:49.171 fio-3.35 00:34:49.171 Starting 4 threads 00:34:50.565 00:34:50.565 job0: (groupid=0, jobs=1): err= 0: pid=1181030: Wed Nov 20 12:48:55 2024 00:34:50.565 read: IOPS=2291, BW=9167KiB/s (9387kB/s)(9176KiB/1001msec) 00:34:50.565 slat (nsec): min=6790, max=20512, avg=7514.87, stdev=861.58 00:34:50.565 clat (usec): min=185, max=528, avg=229.18, stdev=21.92 00:34:50.565 lat (usec): min=192, max=536, avg=236.69, stdev=21.93 00:34:50.565 clat percentiles (usec): 00:34:50.565 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 210], 00:34:50.565 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 239], 00:34:50.565 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:34:50.565 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 400], 99.95th=[ 420], 00:34:50.565 | 99.99th=[ 529] 00:34:50.565 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:50.565 slat (nsec): min=9567, max=40561, avg=10493.56, stdev=1365.15 00:34:50.565 clat (usec): min=127, max=498, avg=164.25, stdev=26.30 00:34:50.565 lat (usec): min=137, max=538, avg=174.74, stdev=26.48 00:34:50.565 clat percentiles (usec): 00:34:50.565 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:34:50.565 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 
60.00th=[ 159], 00:34:50.565 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 241], 00:34:50.565 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 285], 99.95th=[ 343], 00:34:50.565 | 99.99th=[ 498] 00:34:50.565 bw ( KiB/s): min=11208, max=11208, per=34.55%, avg=11208.00, stdev= 0.00, samples=1 00:34:50.565 iops : min= 2802, max= 2802, avg=2802.00, stdev= 0.00, samples=1 00:34:50.565 lat (usec) : 250=92.34%, 500=7.64%, 750=0.02% 00:34:50.565 cpu : usr=2.60%, sys=4.30%, ctx=4855, majf=0, minf=1 00:34:50.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.565 issued rwts: total=2294,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.565 job1: (groupid=0, jobs=1): err= 0: pid=1181031: Wed Nov 20 12:48:55 2024 00:34:50.565 read: IOPS=2395, BW=9582KiB/s (9812kB/s)(9592KiB/1001msec) 00:34:50.565 slat (nsec): min=6621, max=27862, avg=7535.74, stdev=967.86 00:34:50.565 clat (usec): min=191, max=474, avg=230.13, stdev=19.29 00:34:50.565 lat (usec): min=198, max=482, avg=237.67, stdev=19.45 00:34:50.565 clat percentiles (usec): 00:34:50.565 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:34:50.565 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:34:50.565 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 258], 00:34:50.565 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 412], 99.95th=[ 457], 00:34:50.565 | 99.99th=[ 474] 00:34:50.565 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:50.565 slat (nsec): min=9418, max=42067, avg=10515.37, stdev=1234.99 00:34:50.565 clat (usec): min=122, max=456, avg=153.47, stdev=13.44 00:34:50.565 lat (usec): min=132, max=498, avg=163.99, stdev=13.77 00:34:50.565 clat percentiles (usec): 
00:34:50.565 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:34:50.565 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:34:50.565 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:34:50.565 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 208], 99.95th=[ 338], 00:34:50.565 | 99.99th=[ 457] 00:34:50.565 bw ( KiB/s): min=12288, max=12288, per=37.88%, avg=12288.00, stdev= 0.00, samples=1 00:34:50.565 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:34:50.565 lat (usec) : 250=93.18%, 500=6.82% 00:34:50.565 cpu : usr=2.40%, sys=4.60%, ctx=4961, majf=0, minf=1 00:34:50.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.565 issued rwts: total=2398,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.565 job2: (groupid=0, jobs=1): err= 0: pid=1181032: Wed Nov 20 12:48:55 2024 00:34:50.565 read: IOPS=2186, BW=8747KiB/s (8957kB/s)(8756KiB/1001msec) 00:34:50.565 slat (nsec): min=7899, max=42747, avg=9027.00, stdev=1588.68 00:34:50.565 clat (usec): min=184, max=445, avg=225.11, stdev=16.37 00:34:50.565 lat (usec): min=200, max=454, avg=234.14, stdev=16.46 00:34:50.565 clat percentiles (usec): 00:34:50.565 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:34:50.565 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 227], 00:34:50.565 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:34:50.565 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 302], 00:34:50.565 | 99.99th=[ 445] 00:34:50.565 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:50.565 slat (nsec): min=11718, max=39134, avg=13056.25, stdev=1714.92 00:34:50.565 clat (usec): min=138, max=715, 
avg=170.95, stdev=20.01 00:34:50.565 lat (usec): min=151, max=727, avg=184.00, stdev=20.34 00:34:50.565 clat percentiles (usec): 00:34:50.566 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:34:50.566 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:34:50.566 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:34:50.566 | 99.00th=[ 208], 99.50th=[ 219], 99.90th=[ 537], 99.95th=[ 553], 00:34:50.566 | 99.99th=[ 717] 00:34:50.566 bw ( KiB/s): min=10560, max=10560, per=32.55%, avg=10560.00, stdev= 0.00, samples=1 00:34:50.566 iops : min= 2640, max= 2640, avg=2640.00, stdev= 0.00, samples=1 00:34:50.566 lat (usec) : 250=96.93%, 500=3.01%, 750=0.06% 00:34:50.566 cpu : usr=4.20%, sys=8.20%, ctx=4750, majf=0, minf=2 00:34:50.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.566 issued rwts: total=2189,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.566 job3: (groupid=0, jobs=1): err= 0: pid=1181033: Wed Nov 20 12:48:55 2024 00:34:50.566 read: IOPS=26, BW=107KiB/s (109kB/s)(108KiB/1010msec) 00:34:50.566 slat (nsec): min=8138, max=27276, avg=21154.41, stdev=6722.84 00:34:50.566 clat (usec): min=204, max=41227, avg=33433.85, stdev=16116.24 00:34:50.566 lat (usec): min=213, max=41236, avg=33455.01, stdev=16120.67 00:34:50.566 clat percentiles (usec): 00:34:50.566 | 1.00th=[ 204], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[40633], 00:34:50.566 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:50.566 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:50.566 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:50.566 | 99.99th=[41157] 00:34:50.566 write: IOPS=506, BW=2028KiB/s 
(2076kB/s)(2048KiB/1010msec); 0 zone resets 00:34:50.566 slat (nsec): min=10686, max=37728, avg=12285.93, stdev=1695.34 00:34:50.566 clat (usec): min=142, max=309, avg=191.53, stdev=26.28 00:34:50.566 lat (usec): min=155, max=321, avg=203.82, stdev=26.35 00:34:50.566 clat percentiles (usec): 00:34:50.566 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:34:50.566 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:34:50.566 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 233], 95.00th=[ 243], 00:34:50.566 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 310], 00:34:50.566 | 99.99th=[ 310] 00:34:50.566 bw ( KiB/s): min= 4096, max= 4096, per=12.63%, avg=4096.00, stdev= 0.00, samples=1 00:34:50.566 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:50.566 lat (usec) : 250=92.39%, 500=3.53% 00:34:50.566 lat (msec) : 50=4.08% 00:34:50.566 cpu : usr=0.79%, sys=0.59%, ctx=540, majf=0, minf=1 00:34:50.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.566 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:50.566 00:34:50.566 Run status group 0 (all jobs): 00:34:50.566 READ: bw=26.7MiB/s (28.0MB/s), 107KiB/s-9582KiB/s (109kB/s-9812kB/s), io=27.0MiB (28.3MB), run=1001-1010msec 00:34:50.566 WRITE: bw=31.7MiB/s (33.2MB/s), 2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1010msec 00:34:50.566 00:34:50.566 Disk stats (read/write): 00:34:50.566 nvme0n1: ios=1997/2048, merge=0/0, ticks=1419/329, in_queue=1748, util=96.79% 00:34:50.566 nvme0n2: ios=2083/2113, merge=0/0, ticks=621/307, in_queue=928, util=97.22% 00:34:50.566 nvme0n3: ios=1924/2048, merge=0/0, ticks=1356/327, in_queue=1683, util=96.50% 
00:34:50.566 nvme0n4: ios=76/512, merge=0/0, ticks=1465/95, in_queue=1560, util=96.47% 00:34:50.566 12:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:50.566 [global] 00:34:50.566 thread=1 00:34:50.566 invalidate=1 00:34:50.566 rw=write 00:34:50.566 time_based=1 00:34:50.566 runtime=1 00:34:50.566 ioengine=libaio 00:34:50.566 direct=1 00:34:50.566 bs=4096 00:34:50.566 iodepth=128 00:34:50.566 norandommap=0 00:34:50.566 numjobs=1 00:34:50.566 00:34:50.566 verify_dump=1 00:34:50.566 verify_backlog=512 00:34:50.566 verify_state_save=0 00:34:50.566 do_verify=1 00:34:50.566 verify=crc32c-intel 00:34:50.566 [job0] 00:34:50.566 filename=/dev/nvme0n1 00:34:50.566 [job1] 00:34:50.566 filename=/dev/nvme0n2 00:34:50.566 [job2] 00:34:50.566 filename=/dev/nvme0n3 00:34:50.566 [job3] 00:34:50.566 filename=/dev/nvme0n4 00:34:50.566 Could not set queue depth (nvme0n1) 00:34:50.566 Could not set queue depth (nvme0n2) 00:34:50.566 Could not set queue depth (nvme0n3) 00:34:50.566 Could not set queue depth (nvme0n4) 00:34:50.826 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.826 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.826 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.826 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:50.826 fio-3.35 00:34:50.826 Starting 4 threads 00:34:52.198 00:34:52.198 job0: (groupid=0, jobs=1): err= 0: pid=1181450: Wed Nov 20 12:48:57 2024 00:34:52.198 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:34:52.198 slat (nsec): min=942, max=42784k, avg=102215.58, stdev=921813.82 00:34:52.198 clat (usec): min=1867, max=51661, 
avg=13876.93, stdev=9177.04 00:34:52.198 lat (usec): min=1878, max=51663, avg=13979.15, stdev=9217.59 00:34:52.198 clat percentiles (usec): 00:34:52.198 | 1.00th=[ 4359], 5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8979], 00:34:52.198 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10814], 60.00th=[11731], 00:34:52.198 | 70.00th=[13566], 80.00th=[16581], 90.00th=[19268], 95.00th=[39060], 00:34:52.198 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:34:52.198 | 99.99th=[51643] 00:34:52.198 write: IOPS=5344, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1003msec); 0 zone resets 00:34:52.198 slat (nsec): min=1684, max=9421.7k, avg=80890.74, stdev=525690.88 00:34:52.198 clat (usec): min=527, max=33262, avg=10483.18, stdev=3799.60 00:34:52.198 lat (usec): min=535, max=42683, avg=10564.07, stdev=3850.46 00:34:52.198 clat percentiles (usec): 00:34:52.198 | 1.00th=[ 1401], 5.00th=[ 5997], 10.00th=[ 7242], 20.00th=[ 7963], 00:34:52.198 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10290], 00:34:52.198 | 70.00th=[11207], 80.00th=[13435], 90.00th=[15008], 95.00th=[16909], 00:34:52.198 | 99.00th=[26608], 99.50th=[26608], 99.90th=[33162], 99.95th=[33162], 00:34:52.198 | 99.99th=[33162] 00:34:52.198 bw ( KiB/s): min=18064, max=23808, per=27.46%, avg=20936.00, stdev=4061.62, samples=2 00:34:52.198 iops : min= 4516, max= 5952, avg=5234.00, stdev=1015.41, samples=2 00:34:52.198 lat (usec) : 750=0.07%, 1000=0.13% 00:34:52.198 lat (msec) : 2=0.56%, 4=1.33%, 10=45.11%, 20=47.14%, 50=4.93% 00:34:52.198 lat (msec) : 100=0.73% 00:34:52.198 cpu : usr=2.79%, sys=4.59%, ctx=487, majf=0, minf=1 00:34:52.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:52.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:52.198 issued rwts: total=5120,5361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.198 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:34:52.198 job1: (groupid=0, jobs=1): err= 0: pid=1181451: Wed Nov 20 12:48:57 2024 00:34:52.198 read: IOPS=2646, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1004msec) 00:34:52.198 slat (nsec): min=933, max=28574k, avg=203871.19, stdev=1382094.95 00:34:52.198 clat (usec): min=1938, max=86707, avg=23537.37, stdev=16856.78 00:34:52.198 lat (usec): min=6181, max=86714, avg=23741.24, stdev=16961.80 00:34:52.198 clat percentiles (usec): 00:34:52.198 | 1.00th=[ 6194], 5.00th=[ 8291], 10.00th=[11600], 20.00th=[14615], 00:34:52.198 | 30.00th=[14877], 40.00th=[15401], 50.00th=[16712], 60.00th=[18220], 00:34:52.198 | 70.00th=[24249], 80.00th=[29492], 90.00th=[54789], 95.00th=[67634], 00:34:52.198 | 99.00th=[83362], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:34:52.198 | 99.99th=[86508] 00:34:52.198 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:34:52.198 slat (nsec): min=1701, max=18611k, avg=145132.46, stdev=979307.50 00:34:52.198 clat (usec): min=6029, max=83458, avg=20435.89, stdev=14402.64 00:34:52.198 lat (usec): min=6032, max=83471, avg=20581.02, stdev=14455.07 00:34:52.198 clat percentiles (usec): 00:34:52.198 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:34:52.198 | 30.00th=[10814], 40.00th=[11731], 50.00th=[14091], 60.00th=[18220], 00:34:52.198 | 70.00th=[23200], 80.00th=[27657], 90.00th=[41681], 95.00th=[50070], 00:34:52.198 | 99.00th=[78119], 99.50th=[78119], 99.90th=[83362], 99.95th=[83362], 00:34:52.198 | 99.99th=[83362] 00:34:52.198 bw ( KiB/s): min= 8712, max=15624, per=15.96%, avg=12168.00, stdev=4887.52, samples=2 00:34:52.198 iops : min= 2178, max= 3906, avg=3042.00, stdev=1221.88, samples=2 00:34:52.198 lat (msec) : 2=0.02%, 10=15.26%, 20=47.88%, 50=29.69%, 100=7.16% 00:34:52.198 cpu : usr=1.69%, sys=2.69%, ctx=285, majf=0, minf=1 00:34:52.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:34:52.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:52.198 issued rwts: total=2657,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:52.199 job2: (groupid=0, jobs=1): err= 0: pid=1181453: Wed Nov 20 12:48:57 2024 00:34:52.199 read: IOPS=5566, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1003msec) 00:34:52.199 slat (nsec): min=1315, max=12044k, avg=94615.81, stdev=767352.61 00:34:52.199 clat (usec): min=1576, max=24937, avg=12150.25, stdev=3267.21 00:34:52.199 lat (usec): min=2790, max=27389, avg=12244.87, stdev=3318.13 00:34:52.199 clat percentiles (usec): 00:34:52.199 | 1.00th=[ 5342], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10028], 00:34:52.199 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:34:52.199 | 70.00th=[12780], 80.00th=[14091], 90.00th=[17171], 95.00th=[19268], 00:34:52.199 | 99.00th=[21627], 99.50th=[22414], 99.90th=[23725], 99.95th=[25035], 00:34:52.199 | 99.99th=[25035] 00:34:52.199 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:34:52.199 slat (nsec): min=1943, max=9325.7k, avg=78286.35, stdev=532291.24 00:34:52.199 clat (usec): min=1836, max=23987, avg=10545.38, stdev=2464.07 00:34:52.199 lat (usec): min=1840, max=23990, avg=10623.67, stdev=2508.62 00:34:52.199 clat percentiles (usec): 00:34:52.199 | 1.00th=[ 3130], 5.00th=[ 5997], 10.00th=[ 7177], 20.00th=[ 9110], 00:34:52.199 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:34:52.199 | 70.00th=[11600], 80.00th=[12125], 90.00th=[12780], 95.00th=[13698], 00:34:52.199 | 99.00th=[16909], 99.50th=[17957], 99.90th=[21627], 99.95th=[23987], 00:34:52.199 | 99.99th=[23987] 00:34:52.199 bw ( KiB/s): min=21144, max=23912, per=29.54%, avg=22528.00, stdev=1957.27, samples=2 00:34:52.199 iops : min= 5286, max= 5978, avg=5632.00, stdev=489.32, samples=2 00:34:52.199 lat (msec) : 2=0.06%, 4=1.03%, 10=23.04%, 
20=73.87%, 50=1.99% 00:34:52.199 cpu : usr=4.79%, sys=6.19%, ctx=478, majf=0, minf=1 00:34:52.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:52.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:52.199 issued rwts: total=5583,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:52.199 job3: (groupid=0, jobs=1): err= 0: pid=1181454: Wed Nov 20 12:48:57 2024 00:34:52.199 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:34:52.199 slat (nsec): min=1004, max=12337k, avg=92164.09, stdev=662134.28 00:34:52.199 clat (usec): min=713, max=39358, avg=12165.70, stdev=4760.64 00:34:52.199 lat (usec): min=716, max=39364, avg=12257.87, stdev=4814.60 00:34:52.199 clat percentiles (usec): 00:34:52.199 | 1.00th=[ 1450], 5.00th=[ 6521], 10.00th=[ 8356], 20.00th=[ 9503], 00:34:52.199 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11469], 60.00th=[12256], 00:34:52.199 | 70.00th=[13304], 80.00th=[14615], 90.00th=[16909], 95.00th=[18482], 00:34:52.199 | 99.00th=[33162], 99.50th=[36439], 99.90th=[39584], 99.95th=[39584], 00:34:52.199 | 99.99th=[39584] 00:34:52.199 write: IOPS=5058, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:34:52.199 slat (nsec): min=1689, max=9851.3k, avg=101070.95, stdev=584598.69 00:34:52.199 clat (usec): min=1238, max=39349, avg=14043.50, stdev=7834.63 00:34:52.199 lat (usec): min=1248, max=39353, avg=14144.57, stdev=7887.39 00:34:52.199 clat percentiles (usec): 00:34:52.199 | 1.00th=[ 3195], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 9241], 00:34:52.199 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:34:52.199 | 70.00th=[12780], 80.00th=[16712], 90.00th=[30016], 95.00th=[31589], 00:34:52.199 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[39060], 00:34:52.199 | 99.99th=[39584] 
00:34:52.199 bw ( KiB/s): min=16384, max=23192, per=25.95%, avg=19788.00, stdev=4813.98, samples=2 00:34:52.199 iops : min= 4096, max= 5798, avg=4947.00, stdev=1203.50, samples=2 00:34:52.199 lat (usec) : 750=0.04% 00:34:52.199 lat (msec) : 2=0.71%, 4=1.52%, 10=24.42%, 20=62.34%, 50=10.97% 00:34:52.199 cpu : usr=2.99%, sys=4.69%, ctx=437, majf=0, minf=1 00:34:52.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:52.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:52.199 issued rwts: total=4608,5074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:52.199 00:34:52.199 Run status group 0 (all jobs): 00:34:52.199 READ: bw=69.9MiB/s (73.3MB/s), 10.3MiB/s-21.7MiB/s (10.8MB/s-22.8MB/s), io=70.2MiB (73.6MB), run=1003-1004msec 00:34:52.199 WRITE: bw=74.5MiB/s (78.1MB/s), 12.0MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=74.8MiB (78.4MB), run=1003-1004msec 00:34:52.199 00:34:52.199 Disk stats (read/write): 00:34:52.199 nvme0n1: ios=4471/4608, merge=0/0, ticks=24897/19660, in_queue=44557, util=94.59% 00:34:52.199 nvme0n2: ios=2463/2560, merge=0/0, ticks=17116/12541, in_queue=29657, util=97.14% 00:34:52.199 nvme0n3: ios=4646/4807, merge=0/0, ticks=55518/48988, in_queue=104506, util=97.91% 00:34:52.199 nvme0n4: ios=3747/4096, merge=0/0, ticks=41386/53527, in_queue=94913, util=100.00% 00:34:52.199 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:52.199 [global] 00:34:52.199 thread=1 00:34:52.199 invalidate=1 00:34:52.199 rw=randwrite 00:34:52.199 time_based=1 00:34:52.199 runtime=1 00:34:52.199 ioengine=libaio 00:34:52.199 direct=1 00:34:52.199 bs=4096 00:34:52.199 iodepth=128 00:34:52.199 norandommap=0 
00:34:52.199 numjobs=1 00:34:52.199 00:34:52.199 verify_dump=1 00:34:52.199 verify_backlog=512 00:34:52.199 verify_state_save=0 00:34:52.199 do_verify=1 00:34:52.199 verify=crc32c-intel 00:34:52.199 [job0] 00:34:52.199 filename=/dev/nvme0n1 00:34:52.199 [job1] 00:34:52.199 filename=/dev/nvme0n2 00:34:52.199 [job2] 00:34:52.199 filename=/dev/nvme0n3 00:34:52.199 [job3] 00:34:52.199 filename=/dev/nvme0n4 00:34:52.199 Could not set queue depth (nvme0n1) 00:34:52.199 Could not set queue depth (nvme0n2) 00:34:52.199 Could not set queue depth (nvme0n3) 00:34:52.199 Could not set queue depth (nvme0n4) 00:34:52.457 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.457 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.457 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.457 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:52.457 fio-3.35 00:34:52.457 Starting 4 threads 00:34:53.829 00:34:53.829 job0: (groupid=0, jobs=1): err= 0: pid=1181871: Wed Nov 20 12:48:59 2024 00:34:53.829 read: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec) 00:34:53.829 slat (nsec): min=1217, max=11658k, avg=91311.53, stdev=565029.56 00:34:53.829 clat (usec): min=840, max=33845, avg=11649.38, stdev=4321.49 00:34:53.829 lat (usec): min=2873, max=33873, avg=11740.70, stdev=4345.35 00:34:53.829 clat percentiles (usec): 00:34:53.829 | 1.00th=[ 5407], 5.00th=[ 7177], 10.00th=[ 7832], 20.00th=[ 8717], 00:34:53.829 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[11076], 00:34:53.829 | 70.00th=[13829], 80.00th=[15008], 90.00th=[17171], 95.00th=[19006], 00:34:53.829 | 99.00th=[26870], 99.50th=[31065], 99.90th=[32637], 99.95th=[32637], 00:34:53.829 | 99.99th=[33817] 00:34:53.829 write: IOPS=5104, BW=19.9MiB/s 
(20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:34:53.829 slat (nsec): min=1976, max=7330.7k, avg=100299.40, stdev=549696.56 00:34:53.829 clat (usec): min=5078, max=55388, avg=13259.46, stdev=9268.50 00:34:53.829 lat (usec): min=5083, max=55393, avg=13359.76, stdev=9334.55 00:34:53.829 clat percentiles (usec): 00:34:53.829 | 1.00th=[ 6718], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 8979], 00:34:53.829 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:34:53.829 | 70.00th=[10552], 80.00th=[15270], 90.00th=[24511], 95.00th=[33162], 00:34:53.829 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:34:53.829 | 99.99th=[55313] 00:34:53.829 bw ( KiB/s): min=16384, max=24576, per=27.03%, avg=20480.00, stdev=5792.62, samples=2 00:34:53.829 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:34:53.829 lat (usec) : 1000=0.01% 00:34:53.829 lat (msec) : 4=0.44%, 10=56.66%, 20=34.17%, 50=7.82%, 100=0.90% 00:34:53.829 cpu : usr=3.99%, sys=4.29%, ctx=558, majf=0, minf=1 00:34:53.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:53.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.829 issued rwts: total=5060,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.829 job1: (groupid=0, jobs=1): err= 0: pid=1181872: Wed Nov 20 12:48:59 2024 00:34:53.829 read: IOPS=5437, BW=21.2MiB/s (22.3MB/s)(21.4MiB/1006msec) 00:34:53.829 slat (nsec): min=967, max=11870k, avg=77297.79, stdev=587746.03 00:34:53.829 clat (usec): min=1853, max=33608, avg=10310.51, stdev=4163.07 00:34:53.829 lat (usec): min=1862, max=33610, avg=10387.81, stdev=4202.78 00:34:53.829 clat percentiles (usec): 00:34:53.829 | 1.00th=[ 2638], 5.00th=[ 4948], 10.00th=[ 5866], 20.00th=[ 7308], 00:34:53.829 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 
9241], 60.00th=[10159], 00:34:53.829 | 70.00th=[11207], 80.00th=[13304], 90.00th=[15533], 95.00th=[17695], 00:34:53.829 | 99.00th=[23725], 99.50th=[30540], 99.90th=[33162], 99.95th=[33817], 00:34:53.829 | 99.99th=[33817] 00:34:53.829 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:34:53.829 slat (nsec): min=1625, max=9588.3k, avg=89465.73, stdev=508925.44 00:34:53.829 clat (usec): min=316, max=55244, avg=12627.65, stdev=8235.70 00:34:53.829 lat (usec): min=583, max=55249, avg=12717.11, stdev=8287.36 00:34:53.829 clat percentiles (usec): 00:34:53.829 | 1.00th=[ 3916], 5.00th=[ 5800], 10.00th=[ 6783], 20.00th=[ 8225], 00:34:53.829 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:34:53.829 | 70.00th=[11207], 80.00th=[14615], 90.00th=[25297], 95.00th=[33817], 00:34:53.829 | 99.00th=[41157], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:34:53.829 | 99.99th=[55313] 00:34:53.829 bw ( KiB/s): min=18816, max=26240, per=29.73%, avg=22528.00, stdev=5249.56, samples=2 00:34:53.829 iops : min= 4704, max= 6560, avg=5632.00, stdev=1312.39, samples=2 00:34:53.829 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:34:53.829 lat (msec) : 2=0.39%, 4=1.27%, 10=60.05%, 20=29.44%, 50=8.72% 00:34:53.829 lat (msec) : 100=0.11% 00:34:53.829 cpu : usr=2.29%, sys=5.17%, ctx=602, majf=0, minf=2 00:34:53.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:53.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.829 issued rwts: total=5470,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.829 job2: (groupid=0, jobs=1): err= 0: pid=1181873: Wed Nov 20 12:48:59 2024 00:34:53.829 read: IOPS=4670, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1005msec) 00:34:53.829 slat (nsec): min=938, max=10194k, avg=96576.19, stdev=677100.21 
00:34:53.829 clat (usec): min=2977, max=30270, avg=12810.05, stdev=4235.77 00:34:53.829 lat (usec): min=2983, max=30871, avg=12906.63, stdev=4279.32 00:34:53.829 clat percentiles (usec): 00:34:53.829 | 1.00th=[ 4817], 5.00th=[ 6718], 10.00th=[ 8717], 20.00th=[ 9634], 00:34:53.829 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11863], 60.00th=[13042], 00:34:53.829 | 70.00th=[14222], 80.00th=[16057], 90.00th=[18744], 95.00th=[20579], 00:34:53.829 | 99.00th=[25035], 99.50th=[26346], 99.90th=[28181], 99.95th=[28181], 00:34:53.829 | 99.99th=[30278] 00:34:53.830 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:34:53.830 slat (nsec): min=1670, max=11372k, avg=101045.98, stdev=712652.20 00:34:53.830 clat (usec): min=4716, max=29997, avg=13082.35, stdev=3421.29 00:34:53.830 lat (usec): min=4725, max=30034, avg=13183.40, stdev=3497.47 00:34:53.830 clat percentiles (usec): 00:34:53.830 | 1.00th=[ 5538], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10421], 00:34:53.830 | 30.00th=[10814], 40.00th=[11207], 50.00th=[12387], 60.00th=[13173], 00:34:53.830 | 70.00th=[14353], 80.00th=[16188], 90.00th=[17433], 95.00th=[20317], 00:34:53.830 | 99.00th=[22676], 99.50th=[23200], 99.90th=[25297], 99.95th=[28705], 00:34:53.830 | 99.99th=[30016] 00:34:53.830 bw ( KiB/s): min=20152, max=20480, per=26.81%, avg=20316.00, stdev=231.93, samples=2 00:34:53.830 iops : min= 5038, max= 5120, avg=5079.00, stdev=57.98, samples=2 00:34:53.830 lat (msec) : 4=0.25%, 10=17.79%, 20=75.07%, 50=6.89% 00:34:53.830 cpu : usr=4.28%, sys=5.38%, ctx=288, majf=0, minf=2 00:34:53.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.830 issued rwts: total=4694,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.830 job3: 
(groupid=0, jobs=1): err= 0: pid=1181874: Wed Nov 20 12:48:59 2024 00:34:53.830 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:34:53.830 slat (nsec): min=1316, max=10105k, avg=153218.16, stdev=908655.84 00:34:53.830 clat (usec): min=8791, max=44171, avg=20130.93, stdev=6741.72 00:34:53.830 lat (usec): min=8807, max=44197, avg=20284.15, stdev=6825.45 00:34:53.830 clat percentiles (usec): 00:34:53.830 | 1.00th=[11076], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:34:53.830 | 30.00th=[14615], 40.00th=[16450], 50.00th=[18220], 60.00th=[20579], 00:34:53.830 | 70.00th=[23725], 80.00th=[25822], 90.00th=[30802], 95.00th=[33817], 00:34:53.830 | 99.00th=[35390], 99.50th=[38011], 99.90th=[40109], 99.95th=[43779], 00:34:53.830 | 99.99th=[44303] 00:34:53.830 write: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1005msec); 0 zone resets 00:34:53.830 slat (usec): min=2, max=17268, avg=159.47, stdev=915.26 00:34:53.830 clat (usec): min=4613, max=44736, avg=20368.81, stdev=6482.56 00:34:53.830 lat (usec): min=9529, max=44752, avg=20528.28, stdev=6551.42 00:34:53.830 clat percentiles (usec): 00:34:53.830 | 1.00th=[10028], 5.00th=[12387], 10.00th=[13042], 20.00th=[13960], 00:34:53.830 | 30.00th=[14746], 40.00th=[17433], 50.00th=[19006], 60.00th=[22938], 00:34:53.830 | 70.00th=[24773], 80.00th=[25822], 90.00th=[29230], 95.00th=[30278], 00:34:53.830 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[44303], 00:34:53.830 | 99.99th=[44827] 00:34:53.830 bw ( KiB/s): min=12296, max=12336, per=16.25%, avg=12316.00, stdev=28.28, samples=2 00:34:53.830 iops : min= 3074, max= 3084, avg=3079.00, stdev= 7.07, samples=2 00:34:53.830 lat (msec) : 10=0.82%, 20=54.85%, 50=44.33% 00:34:53.830 cpu : usr=2.19%, sys=4.68%, ctx=272, majf=0, minf=1 00:34:53.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:53.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:34:53.830 issued rwts: total=3072,3185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:53.830 00:34:53.830 Run status group 0 (all jobs): 00:34:53.830 READ: bw=71.0MiB/s (74.5MB/s), 11.9MiB/s-21.2MiB/s (12.5MB/s-22.3MB/s), io=71.5MiB (74.9MB), run=1003-1006msec 00:34:53.830 WRITE: bw=74.0MiB/s (77.6MB/s), 12.4MiB/s-21.9MiB/s (13.0MB/s-22.9MB/s), io=74.4MiB (78.1MB), run=1003-1006msec 00:34:53.830 00:34:53.830 Disk stats (read/write): 00:34:53.830 nvme0n1: ios=3651/4096, merge=0/0, ticks=14987/17644, in_queue=32631, util=98.00% 00:34:53.830 nvme0n2: ios=4146/4430, merge=0/0, ticks=27072/39978, in_queue=67050, util=91.26% 00:34:53.830 nvme0n3: ios=4051/4096, merge=0/0, ticks=25077/24708, in_queue=49785, util=87.81% 00:34:53.830 nvme0n4: ios=2580/2567, merge=0/0, ticks=18877/22188, in_queue=41065, util=96.57% 00:34:53.830 12:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:53.830 12:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1182137 00:34:53.830 12:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:53.830 12:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:53.830 [global] 00:34:53.830 thread=1 00:34:53.830 invalidate=1 00:34:53.830 rw=read 00:34:53.830 time_based=1 00:34:53.830 runtime=10 00:34:53.830 ioengine=libaio 00:34:53.830 direct=1 00:34:53.830 bs=4096 00:34:53.830 iodepth=1 00:34:53.830 norandommap=1 00:34:53.830 numjobs=1 00:34:53.830 00:34:53.830 [job0] 00:34:53.830 filename=/dev/nvme0n1 00:34:53.830 [job1] 00:34:53.830 filename=/dev/nvme0n2 00:34:53.830 [job2] 00:34:53.830 filename=/dev/nvme0n3 00:34:53.830 [job3] 00:34:53.830 filename=/dev/nvme0n4 00:34:53.830 
Could not set queue depth (nvme0n1) 00:34:53.830 Could not set queue depth (nvme0n2) 00:34:53.830 Could not set queue depth (nvme0n3) 00:34:53.830 Could not set queue depth (nvme0n4) 00:34:54.087 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:54.087 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:54.087 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:54.087 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:54.087 fio-3.35 00:34:54.087 Starting 4 threads 00:34:56.614 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:56.872 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1982464, buflen=4096 00:34:56.872 fio: pid=1182296, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:56.872 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:56.872 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:56.872 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:56.872 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=16265216, buflen=4096 00:34:56.872 fio: pid=1182295, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:57.129 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=303104, buflen=4096 
00:34:57.130 fio: pid=1182292, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:57.130 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.130 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:57.388 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54947840, buflen=4096 00:34:57.388 fio: pid=1182293, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:57.388 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.388 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:57.388 00:34:57.388 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182292: Wed Nov 20 12:49:03 2024 00:34:57.388 read: IOPS=24, BW=97.2KiB/s (99.6kB/s)(296KiB/3044msec) 00:34:57.388 slat (usec): min=11, max=10726, avg=283.91, stdev=1588.97 00:34:57.388 clat (usec): min=470, max=41154, avg=40421.77, stdev=4708.31 00:34:57.388 lat (usec): min=505, max=51880, avg=40709.21, stdev=5008.58 00:34:57.388 clat percentiles (usec): 00:34:57.388 | 1.00th=[ 469], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:57.388 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:57.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:57.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:57.388 | 99.99th=[41157] 00:34:57.388 bw ( KiB/s): min= 96, max= 104, per=0.45%, avg=99.20, stdev= 4.38, 
samples=5 00:34:57.388 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:34:57.388 lat (usec) : 500=1.33% 00:34:57.388 lat (msec) : 50=97.33% 00:34:57.388 cpu : usr=0.13%, sys=0.00%, ctx=77, majf=0, minf=1 00:34:57.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.388 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.388 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:57.388 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182293: Wed Nov 20 12:49:03 2024 00:34:57.388 read: IOPS=4147, BW=16.2MiB/s (17.0MB/s)(52.4MiB/3235msec) 00:34:57.388 slat (usec): min=6, max=26575, avg=12.82, stdev=291.14 00:34:57.388 clat (usec): min=148, max=42196, avg=225.63, stdev=1440.63 00:34:57.388 lat (usec): min=155, max=42203, avg=237.57, stdev=1467.07 00:34:57.388 clat percentiles (usec): 00:34:57.388 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 161], 00:34:57.388 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:34:57.388 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 206], 00:34:57.388 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[41157], 99.95th=[41681], 00:34:57.388 | 99.99th=[42206] 00:34:57.388 bw ( KiB/s): min= 4992, max=23224, per=74.72%, avg=16579.17, stdev=8347.37, samples=6 00:34:57.388 iops : min= 1248, max= 5806, avg=4144.67, stdev=2086.79, samples=6 00:34:57.388 lat (usec) : 250=99.11%, 500=0.72%, 750=0.01%, 1000=0.01% 00:34:57.388 lat (msec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12% 00:34:57.388 cpu : usr=0.99%, sys=3.74%, ctx=13424, majf=0, minf=2 00:34:57.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:57.388 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.388 issued rwts: total=13416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:57.388 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182295: Wed Nov 20 12:49:03 2024 00:34:57.388 read: IOPS=1375, BW=5502KiB/s (5634kB/s)(15.5MiB/2887msec) 00:34:57.388 slat (nsec): min=6823, max=47644, avg=8074.10, stdev=2330.26 00:34:57.388 clat (usec): min=150, max=41325, avg=712.33, stdev=4501.19 00:34:57.388 lat (usec): min=165, max=41349, avg=720.40, stdev=4502.90 00:34:57.388 clat percentiles (usec): 00:34:57.388 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:34:57.388 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 245], 00:34:57.388 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 260], 00:34:57.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:57.388 | 99.99th=[41157] 00:34:57.388 bw ( KiB/s): min= 96, max=17256, per=28.57%, avg=6339.20, stdev=8615.74, samples=5 00:34:57.388 iops : min= 24, max= 4314, avg=1584.80, stdev=2153.94, samples=5 00:34:57.388 lat (usec) : 250=77.11%, 500=21.60%, 750=0.03% 00:34:57.388 lat (msec) : 50=1.23% 00:34:57.388 cpu : usr=0.14%, sys=1.52%, ctx=3975, majf=0, minf=1 00:34:57.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.388 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.388 issued rwts: total=3972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:57.388 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182296: Wed Nov 20 12:49:03 2024 00:34:57.388 read: IOPS=181, BW=725KiB/s 
(742kB/s)(1936KiB/2671msec) 00:34:57.388 slat (nsec): min=6776, max=31491, avg=9694.52, stdev=5481.20 00:34:57.388 clat (usec): min=190, max=41449, avg=5461.83, stdev=13592.37 00:34:57.388 lat (usec): min=197, max=41457, avg=5471.50, stdev=13595.86 00:34:57.388 clat percentiles (usec): 00:34:57.388 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 239], 00:34:57.388 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 265], 00:34:57.388 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[40633], 95.00th=[41157], 00:34:57.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:34:57.388 | 99.99th=[41681] 00:34:57.388 bw ( KiB/s): min= 104, max= 3256, per=3.45%, avg=766.40, stdev=1391.97, samples=5 00:34:57.388 iops : min= 26, max= 814, avg=191.60, stdev=347.99, samples=5 00:34:57.388 lat (usec) : 250=21.65%, 500=65.15%, 750=0.21% 00:34:57.388 lat (msec) : 50=12.78% 00:34:57.388 cpu : usr=0.04%, sys=0.22%, ctx=485, majf=0, minf=2 00:34:57.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.388 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.388 issued rwts: total=485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:57.388 00:34:57.388 Run status group 0 (all jobs): 00:34:57.388 READ: bw=21.7MiB/s (22.7MB/s), 97.2KiB/s-16.2MiB/s (99.6kB/s-17.0MB/s), io=70.1MiB (73.5MB), run=2671-3235msec 00:34:57.388 00:34:57.388 Disk stats (read/write): 00:34:57.388 nvme0n1: ios=70/0, merge=0/0, ticks=2829/0, in_queue=2829, util=95.03% 00:34:57.388 nvme0n2: ios=12926/0, merge=0/0, ticks=2856/0, in_queue=2856, util=94.56% 00:34:57.389 nvme0n3: ios=4002/0, merge=0/0, ticks=3355/0, in_queue=3355, util=99.43% 00:34:57.389 nvme0n4: ios=482/0, merge=0/0, ticks=2562/0, in_queue=2562, util=96.41% 00:34:57.647 12:49:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.647 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:57.647 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.647 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:57.905 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:57.905 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:58.163 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:58.163 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:58.422 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:58.422 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1182137 00:34:58.422 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:58.422 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:58.422 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:58.422 nvmf hotplug test: fio failed as expected 00:34:58.422 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:58.681 rmmod nvme_tcp 00:34:58.681 rmmod nvme_fabrics 00:34:58.681 rmmod nvme_keyring 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1179348 ']' 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1179348 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1179348 ']' 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1179348 00:34:58.681 12:49:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179348 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179348' 00:34:58.681 killing process with pid 1179348 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1179348 00:34:58.681 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1179348 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.941 
12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.941 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.847 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:01.107 00:35:01.107 real 0m25.854s 00:35:01.107 user 1m44.276s 00:35:01.108 sys 0m11.141s 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:01.108 ************************************ 00:35:01.108 END TEST nvmf_fio_target 00:35:01.108 ************************************ 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:01.108 ************************************ 00:35:01.108 START TEST nvmf_bdevio 00:35:01.108 
************************************ 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:01.108 * Looking for test storage... 00:35:01.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.108 --rc genhtml_branch_coverage=1 00:35:01.108 --rc genhtml_function_coverage=1 00:35:01.108 --rc genhtml_legend=1 00:35:01.108 --rc geninfo_all_blocks=1 00:35:01.108 --rc geninfo_unexecuted_blocks=1 00:35:01.108 00:35:01.108 ' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.108 --rc genhtml_branch_coverage=1 00:35:01.108 --rc genhtml_function_coverage=1 00:35:01.108 --rc genhtml_legend=1 00:35:01.108 --rc geninfo_all_blocks=1 00:35:01.108 --rc geninfo_unexecuted_blocks=1 00:35:01.108 00:35:01.108 ' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.108 --rc genhtml_branch_coverage=1 00:35:01.108 --rc genhtml_function_coverage=1 00:35:01.108 --rc genhtml_legend=1 00:35:01.108 --rc geninfo_all_blocks=1 00:35:01.108 --rc geninfo_unexecuted_blocks=1 00:35:01.108 00:35:01.108 ' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:35:01.108 --rc genhtml_branch_coverage=1 00:35:01.108 --rc genhtml_function_coverage=1 00:35:01.108 --rc genhtml_legend=1 00:35:01.108 --rc geninfo_all_blocks=1 00:35:01.108 --rc geninfo_unexecuted_blocks=1 00:35:01.108 00:35:01.108 ' 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.108 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:35:01.368 12:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.368 12:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.368 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:01.369 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.940 12:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:07.940 12:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:35:07.940 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:35:07.940 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:35:07.941 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:35:07.941 Found net devices under 0000:1a:00.0: cvl_0_0 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:35:07.941 Found net devices under 0000:1a:00.1: cvl_0_1 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.941 
12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:07.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:35:07.941 00:35:07.941 --- 10.0.0.2 ping statistics --- 00:35:07.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.941 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:07.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:35:07.941 00:35:07.941 --- 10.0.0.1 ping statistics --- 00:35:07.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.941 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:07.941 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
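In the firewall step above, `ipts -I INPUT 1 ...` (nvmf/common.sh@287) expands into an `iptables` call that appends an `SPDK_NVMF:` comment containing the rule's own arguments. A minimal sketch of such a wrapper, assuming the same tagging convention seen in the trace:

```shell
# Forward all arguments to iptables, tagging the rule with its own
# argument string so cleanup can later find and delete exactly the
# rules this test run inserted.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

Tagging rules this way lets teardown do a targeted `iptables-save | grep SPDK_NVMF` sweep instead of flushing chains that may carry unrelated rules.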
nvmf/common.sh@509 -- # nvmfpid=1186786 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1186786 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1186786 ']' 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.941 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.941 [2024-11-20 12:49:13.080513] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:07.941 [2024-11-20 12:49:13.081386] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:35:07.941 [2024-11-20 12:49:13.081425] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.941 [2024-11-20 12:49:13.159734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:07.941 [2024-11-20 12:49:13.199330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:07.941 [2024-11-20 12:49:13.199363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:07.941 [2024-11-20 12:49:13.199370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:07.942 [2024-11-20 12:49:13.199375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:07.942 [2024-11-20 12:49:13.199380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:07.942 [2024-11-20 12:49:13.200968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:07.942 [2024-11-20 12:49:13.201079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:07.942 [2024-11-20 12:49:13.201191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:07.942 [2024-11-20 12:49:13.201192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:07.942 [2024-11-20 12:49:13.264987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:07.942 [2024-11-20 12:49:13.265894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:07.942 [2024-11-20 12:49:13.265916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:07.942 [2024-11-20 12:49:13.266518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:07.942 [2024-11-20 12:49:13.266550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.201 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.201 [2024-11-20 12:49:13.937928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.459 Malloc0 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.459 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.459 [2024-11-20 12:49:14.018126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
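The `rpc_cmd` calls traced above (bdevio.sh@18 through @22) build the target under test: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener. A condensed sketch of that sequence as direct RPC-client calls; the helper name and the `./scripts/rpc.py` default are assumptions about a local SPDK checkout, not the script's actual wiring:

```shell
# Drive the same five RPCs bdevio.sh issues, against a running nvmf_tgt.
# The RPC client is a parameter so the sequence can be pointed at any socket.
setup_bdevio_target() {
    local rpc=${1:-./scripts/rpc.py}
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # bdevio.sh@18
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # @19: 64 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # @20
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # @21
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @22
}
```

Order matters here: the transport must exist before a listener can be added, and the bdev must exist before it can back a namespace, which is why the log shows the "TCP Transport Init" notice before the Malloc0 and listener steps.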
00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:08.459 { 00:35:08.459 "params": { 00:35:08.459 "name": "Nvme$subsystem", 00:35:08.459 "trtype": "$TEST_TRANSPORT", 00:35:08.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.459 "adrfam": "ipv4", 00:35:08.459 "trsvcid": "$NVMF_PORT", 00:35:08.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.459 "hdgst": ${hdgst:-false}, 00:35:08.459 "ddgst": ${ddgst:-false} 00:35:08.459 }, 00:35:08.459 "method": "bdev_nvme_attach_controller" 00:35:08.459 } 00:35:08.459 EOF 00:35:08.459 )") 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:08.459 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:08.460 "params": { 00:35:08.460 "name": "Nvme1", 00:35:08.460 "trtype": "tcp", 00:35:08.460 "traddr": "10.0.0.2", 00:35:08.460 "adrfam": "ipv4", 00:35:08.460 "trsvcid": "4420", 00:35:08.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:08.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:08.460 "hdgst": false, 00:35:08.460 "ddgst": false 00:35:08.460 }, 00:35:08.460 "method": "bdev_nvme_attach_controller" 00:35:08.460 }' 00:35:08.460 [2024-11-20 12:49:14.068946] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:35:08.460 [2024-11-20 12:49:14.068990] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186901 ] 00:35:08.460 [2024-11-20 12:49:14.142681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:08.460 [2024-11-20 12:49:14.183076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.460 [2024-11-20 12:49:14.183190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.460 [2024-11-20 12:49:14.183190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.023 I/O targets: 00:35:09.024 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:09.024 00:35:09.024 00:35:09.024 CUnit - A unit testing framework for C - Version 2.1-3 00:35:09.024 http://cunit.sourceforge.net/ 00:35:09.024 00:35:09.024 00:35:09.024 Suite: bdevio tests on: Nvme1n1 00:35:09.024 Test: blockdev write read block ...passed 00:35:09.024 Test: blockdev write zeroes read block ...passed 00:35:09.024 Test: blockdev write zeroes read no split ...passed 00:35:09.024 Test: blockdev 
write zeroes read split ...passed 00:35:09.024 Test: blockdev write zeroes read split partial ...passed 00:35:09.024 Test: blockdev reset ...[2024-11-20 12:49:14.688969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:09.024 [2024-11-20 12:49:14.689032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8af780 (9): Bad file descriptor 00:35:09.024 [2024-11-20 12:49:14.692173] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:09.024 passed 00:35:09.024 Test: blockdev write read 8 blocks ...passed 00:35:09.024 Test: blockdev write read size > 128k ...passed 00:35:09.024 Test: blockdev write read invalid size ...passed 00:35:09.024 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:09.024 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:09.024 Test: blockdev write read max offset ...passed 00:35:09.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:09.280 Test: blockdev writev readv 8 blocks ...passed 00:35:09.280 Test: blockdev writev readv 30 x 1block ...passed 00:35:09.280 Test: blockdev writev readv block ...passed 00:35:09.280 Test: blockdev writev readv size > 128k ...passed 00:35:09.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:09.280 Test: blockdev comparev and writev ...[2024-11-20 12:49:14.948408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 [2024-11-20 12:49:14.948438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.280 [2024-11-20 12:49:14.948451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 
[2024-11-20 12:49:14.948457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.280 [2024-11-20 12:49:14.948731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 [2024-11-20 12:49:14.948741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:09.280 [2024-11-20 12:49:14.948751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 [2024-11-20 12:49:14.948758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:09.280 [2024-11-20 12:49:14.949016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 [2024-11-20 12:49:14.949026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:09.280 [2024-11-20 12:49:14.949036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 [2024-11-20 12:49:14.949042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:09.280 [2024-11-20 12:49:14.949302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 [2024-11-20 12:49:14.949312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:09.280 [2024-11-20 12:49:14.949322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:09.280 [2024-11-20 12:49:14.949330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:09.280 passed 00:35:09.281 Test: blockdev nvme passthru rw ...passed 00:35:09.281 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:49:15.032788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:09.281 [2024-11-20 12:49:15.032803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:09.281 [2024-11-20 12:49:15.032913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:09.281 [2024-11-20 12:49:15.032923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:09.281 [2024-11-20 12:49:15.033038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:09.281 [2024-11-20 12:49:15.033047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:09.281 [2024-11-20 12:49:15.033163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:09.281 [2024-11-20 12:49:15.033172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:09.281 passed 00:35:09.538 Test: blockdev nvme admin passthru ...passed 00:35:09.538 Test: blockdev copy ...passed 00:35:09.538 00:35:09.538 Run Summary: Type Total Ran Passed Failed Inactive 00:35:09.538 suites 1 1 n/a 0 0 00:35:09.538 tests 23 23 23 0 0 00:35:09.538 asserts 152 152 152 0 n/a 00:35:09.538 00:35:09.538 Elapsed time = 1.196 
seconds 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:09.538 rmmod nvme_tcp 00:35:09.538 rmmod nvme_fabrics 00:35:09.538 rmmod nvme_keyring 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:09.538 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:09.539 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:09.539 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1186786 ']' 00:35:09.539 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1186786 00:35:09.539 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1186786 ']' 00:35:09.539 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1186786 00:35:09.539 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:09.797 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1186786 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1186786' 00:35:09.798 killing process with pid 1186786 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1186786 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1186786 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.798 12:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.333 12:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:12.333 00:35:12.333 real 0m10.917s 00:35:12.333 user 0m9.861s 00:35:12.333 sys 0m5.428s 00:35:12.333 12:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.333 12:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:12.333 ************************************ 00:35:12.333 END TEST nvmf_bdevio 00:35:12.333 ************************************ 00:35:12.333 12:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:12.333 00:35:12.333 real 4m40.426s 00:35:12.333 user 9m26.038s 00:35:12.333 sys 1m49.684s 00:35:12.333 12:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.333 12:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:12.333 ************************************ 00:35:12.333 END TEST nvmf_target_core_interrupt_mode 00:35:12.333 ************************************ 00:35:12.333 12:49:17 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:12.333 12:49:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:12.333 12:49:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:12.333 12:49:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.333 ************************************ 00:35:12.333 START TEST nvmf_interrupt 00:35:12.333 ************************************ 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:12.333 * Looking for test storage... 
00:35:12.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:12.333 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:12.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.334 --rc genhtml_branch_coverage=1 00:35:12.334 --rc genhtml_function_coverage=1 00:35:12.334 --rc genhtml_legend=1 00:35:12.334 --rc geninfo_all_blocks=1 00:35:12.334 --rc geninfo_unexecuted_blocks=1 00:35:12.334 00:35:12.334 ' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:12.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.334 --rc genhtml_branch_coverage=1 00:35:12.334 --rc 
genhtml_function_coverage=1 00:35:12.334 --rc genhtml_legend=1 00:35:12.334 --rc geninfo_all_blocks=1 00:35:12.334 --rc geninfo_unexecuted_blocks=1 00:35:12.334 00:35:12.334 ' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:12.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.334 --rc genhtml_branch_coverage=1 00:35:12.334 --rc genhtml_function_coverage=1 00:35:12.334 --rc genhtml_legend=1 00:35:12.334 --rc geninfo_all_blocks=1 00:35:12.334 --rc geninfo_unexecuted_blocks=1 00:35:12.334 00:35:12.334 ' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:12.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.334 --rc genhtml_branch_coverage=1 00:35:12.334 --rc genhtml_function_coverage=1 00:35:12.334 --rc genhtml_legend=1 00:35:12.334 --rc geninfo_all_blocks=1 00:35:12.334 --rc geninfo_unexecuted_blocks=1 00:35:12.334 00:35:12.334 ' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:12.334 
12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.334 
12:49:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:12.334 12:49:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:12.334 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:12.335 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:12.335 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.335 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:12.335 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.335 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:12.335 
12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:12.335 12:49:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:12.335 12:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.017 12:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:35:19.017 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:35:19.017 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.017 12:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:35:19.017 Found net devices under 0000:1a:00.0: cvl_0_0 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:35:19.017 Found net devices under 0000:1a:00.1: cvl_0_1 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.017 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.018 12:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:19.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:35:19.018 00:35:19.018 --- 10.0.0.2 ping statistics --- 00:35:19.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.018 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:19.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:35:19.018 00:35:19.018 --- 10.0.0.1 ping statistics --- 00:35:19.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.018 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.018 12:49:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.018 12:49:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1190895 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1190895 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1190895 ']' 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 [2024-11-20 12:49:24.060624] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:19.018 [2024-11-20 12:49:24.061546] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:35:19.018 [2024-11-20 12:49:24.061582] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.018 [2024-11-20 12:49:24.138151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:19.018 [2024-11-20 12:49:24.176218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.018 [2024-11-20 12:49:24.176252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.018 [2024-11-20 12:49:24.176258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.018 [2024-11-20 12:49:24.176264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.018 [2024-11-20 12:49:24.176269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.018 [2024-11-20 12:49:24.177520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.018 [2024-11-20 12:49:24.177521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.018 [2024-11-20 12:49:24.242742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:19.018 [2024-11-20 12:49:24.243035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:19.018 [2024-11-20 12:49:24.243349] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:19.018 5000+0 records in 00:35:19.018 5000+0 records out 00:35:19.018 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0172966 s, 592 MB/s 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 AIO0 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.018 12:49:24 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 [2024-11-20 12:49:24.378301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.018 [2024-11-20 12:49:24.418680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1190895 0 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1190895 0 idle 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:19.018 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190895 root 20 0 128.2g 44800 33152 S 6.2 0.0 0:00.25 reactor_0' 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190895 root 20 0 128.2g 44800 33152 S 6.2 0.0 0:00.25 reactor_0 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:19.019 
12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1190895 1 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1190895 1 idle 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:19.019 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190937 root 20 0 128.2g 44800 33152 S 0.0 0.0 0:00.00 reactor_1' 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190937 root 20 0 128.2g 
44800 33152 S 0.0 0.0 0:00.00 reactor_1 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1190990 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1190895 0 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1190895 0 busy 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190895 root 20 0 128.2g 45696 33152 R 13.3 0.0 0:00.27 reactor_0' 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190895 root 20 0 128.2g 45696 33152 R 13.3 0.0 0:00.27 reactor_0 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:19.277 12:49:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:20.651 12:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:20.651 12:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:20.651 12:49:25 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:20.651 12:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190895 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:02.63 reactor_0' 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190895 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:02.63 reactor_0 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1190895 1 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1190895 1 busy 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190937 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:01.38 reactor_1' 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190937 root 20 0 128.2g 45696 33152 R 99.9 0.0 0:01.38 reactor_1 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:20.651 12:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1190990 00:35:30.619 Initializing NVMe Controllers 00:35:30.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:35:30.619 Controller IO queue size 256, less than required. 00:35:30.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:30.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:30.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:30.619 Initialization complete. Launching workers. 00:35:30.619 ======================================================== 00:35:30.619 Latency(us) 00:35:30.619 Device Information : IOPS MiB/s Average min max 00:35:30.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 17918.72 69.99 14295.00 3252.67 29165.24 00:35:30.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18206.82 71.12 14064.54 7185.81 25629.93 00:35:30.619 ======================================================== 00:35:30.619 Total : 36125.53 141.12 14178.85 3252.67 29165.24 00:35:30.619 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1190895 0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1190895 0 idle 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190895 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:20.23 reactor_0' 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190895 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:20.23 reactor_0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1190895 1 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1190895 1 idle 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190937 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:10.00 reactor_1' 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190937 root 20 0 128.2g 45696 33152 S 0.0 0.0 0:10.00 reactor_1 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.619 12:49:35 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:30.619 12:49:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1190895 0 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1190895 0 idle 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190895 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:20.49 reactor_0' 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190895 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:20.49 reactor_0 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:32.525 12:49:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1190895 1 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1190895 1 idle 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1190895 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1190895 -w 256 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1190937 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:10.11 reactor_1' 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1190937 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:10.11 reactor_1 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:32.525 
12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:32.525 12:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:32.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:32.785 12:49:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:32.785 rmmod nvme_tcp 00:35:32.785 rmmod nvme_fabrics 00:35:32.785 rmmod nvme_keyring 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:32.785 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1190895 ']' 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1190895 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1190895 ']' 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1190895 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.786 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1190895 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1190895' 00:35:33.044 killing process with pid 1190895 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1190895 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1190895 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:33.044 12:49:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:33.044 12:49:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.579 12:49:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:35.579 00:35:35.579 real 0m23.148s 00:35:35.579 user 0m40.285s 00:35:35.579 sys 0m8.067s 00:35:35.579 12:49:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:35.579 12:49:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.579 ************************************ 00:35:35.579 END TEST nvmf_interrupt 00:35:35.579 ************************************ 00:35:35.579 00:35:35.579 real 28m7.547s 00:35:35.579 user 58m43.260s 00:35:35.579 sys 9m15.913s 00:35:35.579 12:49:40 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:35.579 12:49:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.579 ************************************ 00:35:35.579 END TEST nvmf_tcp 00:35:35.579 ************************************ 00:35:35.579 12:49:40 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:35.579 12:49:40 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:35.579 12:49:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:35.579 12:49:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:35.579 12:49:40 -- common/autotest_common.sh@10 -- # set +x 00:35:35.579 ************************************ 00:35:35.579 START TEST spdkcli_nvmf_tcp 00:35:35.579 ************************************ 00:35:35.579 12:49:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:35.579 * Looking for test storage... 00:35:35.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:35.579 
12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.579 --rc genhtml_branch_coverage=1 00:35:35.579 --rc genhtml_function_coverage=1 00:35:35.579 
--rc genhtml_legend=1 00:35:35.579 --rc geninfo_all_blocks=1 00:35:35.579 --rc geninfo_unexecuted_blocks=1 00:35:35.579 00:35:35.579 ' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.579 --rc genhtml_branch_coverage=1 00:35:35.579 --rc genhtml_function_coverage=1 00:35:35.579 --rc genhtml_legend=1 00:35:35.579 --rc geninfo_all_blocks=1 00:35:35.579 --rc geninfo_unexecuted_blocks=1 00:35:35.579 00:35:35.579 ' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.579 --rc genhtml_branch_coverage=1 00:35:35.579 --rc genhtml_function_coverage=1 00:35:35.579 --rc genhtml_legend=1 00:35:35.579 --rc geninfo_all_blocks=1 00:35:35.579 --rc geninfo_unexecuted_blocks=1 00:35:35.579 00:35:35.579 ' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:35.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.579 --rc genhtml_branch_coverage=1 00:35:35.579 --rc genhtml_function_coverage=1 00:35:35.579 --rc genhtml_legend=1 00:35:35.579 --rc geninfo_all_blocks=1 00:35:35.579 --rc geninfo_unexecuted_blocks=1 00:35:35.579 00:35:35.579 ' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.579 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:35.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1194039 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1194039 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1194039 ']' 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.580 12:49:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.580 [2024-11-20 12:49:41.245355] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:35:35.580 [2024-11-20 12:49:41.245399] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194039 ] 00:35:35.580 [2024-11-20 12:49:41.316233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:35.839 [2024-11-20 12:49:41.356016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.839 [2024-11-20 12:49:41.356016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:36.406 12:49:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:36.406 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:36.406 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:36.406 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:36.406 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:35:36.406 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:36.406 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:36.406 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:36.406 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:36.406 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:36.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:36.406 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:36.406 ' 00:35:39.696 [2024-11-20 12:49:44.767419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.633 [2024-11-20 12:49:46.111919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:43.166 [2024-11-20 12:49:48.599280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:45.068 [2024-11-20 12:49:50.754069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:46.971 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:46.971 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:46.971 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:46.971 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:46.971 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:46.971 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:46.971 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:46.971 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:46.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:46.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:46.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:46.971 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:46.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:46.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:46.971 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:46.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:46.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:46.972 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:46.972 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:46.972 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:46.972 12:49:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:47.230 12:49:52 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:47.230 12:49:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:47.230 12:49:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:47.230 12:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.230 12:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:47.490 12:49:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:47.490 12:49:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:47.490 12:49:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:47.490 12:49:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:47.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:47.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:47.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:47.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:47.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:47.490 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:47.490 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:47.490 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:35:47.490 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:47.490 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:47.490 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:47.490 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:47.490 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:47.490 ' 00:35:52.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:52.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:52.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:52.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:52.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:52.762 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:52.762 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.762 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:52.762 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:52.762 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:52.762 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:52.762 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:52.762 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1194039 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1194039 ']' 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1194039 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194039 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194039' 00:35:53.022 killing process with pid 1194039 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1194039 00:35:53.022 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1194039 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1194039 ']' 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1194039 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1194039 ']' 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1194039 00:35:53.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1194039) - No such process 00:35:53.281 12:49:58 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1194039 is not found' 00:35:53.281 Process with pid 1194039 is not found 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:53.281 00:35:53.281 real 0m17.886s 00:35:53.281 user 0m39.402s 00:35:53.281 sys 0m0.815s 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.281 12:49:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.281 ************************************ 00:35:53.281 END TEST spdkcli_nvmf_tcp 00:35:53.281 ************************************ 00:35:53.281 12:49:58 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:53.281 12:49:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:53.281 12:49:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:53.281 12:49:58 -- common/autotest_common.sh@10 -- # set +x 00:35:53.281 ************************************ 00:35:53.281 START TEST nvmf_identify_passthru 00:35:53.281 ************************************ 00:35:53.281 12:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:53.281 * Looking for test storage... 
00:35:53.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:53.281 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:53.281 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:53.281 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:53.541 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:53.541 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.541 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.541 --rc genhtml_branch_coverage=1 00:35:53.541 --rc genhtml_function_coverage=1 00:35:53.541 --rc genhtml_legend=1 00:35:53.541 --rc geninfo_all_blocks=1 00:35:53.541 --rc geninfo_unexecuted_blocks=1 00:35:53.541 00:35:53.541 ' 00:35:53.541 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.541 --rc genhtml_branch_coverage=1 00:35:53.541 --rc genhtml_function_coverage=1 
00:35:53.541 --rc genhtml_legend=1 00:35:53.541 --rc geninfo_all_blocks=1 00:35:53.541 --rc geninfo_unexecuted_blocks=1 00:35:53.541 00:35:53.541 ' 00:35:53.541 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.541 --rc genhtml_branch_coverage=1 00:35:53.541 --rc genhtml_function_coverage=1 00:35:53.541 --rc genhtml_legend=1 00:35:53.541 --rc geninfo_all_blocks=1 00:35:53.541 --rc geninfo_unexecuted_blocks=1 00:35:53.541 00:35:53.541 ' 00:35:53.541 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.541 --rc genhtml_branch_coverage=1 00:35:53.541 --rc genhtml_function_coverage=1 00:35:53.541 --rc genhtml_legend=1 00:35:53.541 --rc geninfo_all_blocks=1 00:35:53.541 --rc geninfo_unexecuted_blocks=1 00:35:53.541 00:35:53.541 ' 00:35:53.541 12:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.541 12:49:59 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.541 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.541 12:49:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.541 12:49:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.541 12:49:59 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.541 12:49:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.541 12:49:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:53.542 12:49:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:53.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.542 12:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.542 12:49:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.542 12:49:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.542 12:49:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.542 12:49:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.542 12:49:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.542 12:49:59 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.542 12:49:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.542 12:49:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:53.542 12:49:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.542 12:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.542 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:53.542 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:53.542 12:49:59 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:53.542 12:49:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:00.118 12:50:04 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:00.118 
12:50:04 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:36:00.118 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:36:00.118 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.118 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:36:00.119 Found net devices under 0000:1a:00.0: cvl_0_0 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:36:00.119 Found net devices under 0000:1a:00.1: cvl_0_1 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:00.119 12:50:04 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:00.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:00.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:36:00.119 00:36:00.119 --- 10.0.0.2 ping statistics --- 00:36:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.119 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:00.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:00.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:36:00.119 00:36:00.119 --- 10.0.0.1 ping statistics --- 00:36:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.119 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:00.119 12:50:05 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:00.119 12:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.119 12:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:00.119 12:50:05 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 0000:d9:00.0 00:36:00.119 12:50:05 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:36:00.119 12:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:36:00.119 12:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:36:00.119 12:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:00.119 12:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:00.119 12:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:05.392 12:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ9512048J2P0BGN 00:36:05.392 12:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:05.392 12:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:05.392 12:50:10 
nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:10.654 12:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:10.654 12:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.654 12:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.654 12:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1202087 00:36:10.654 12:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:10.654 12:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:10.654 12:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1202087 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1202087 ']' 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:10.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
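The `waitforlisten` step above launches `nvmf_tgt` in the background and then polls until the process exposes its RPC socket at `/var/tmp/spdk.sock`. A hedged sketch of that polling pattern, with a background `touch` standing in for the target creating its socket (the file path and retry count here are illustrative; the real helper in `autotest_common.sh` caps at 100 retries, as the `max_retries=100` line in the trace shows):

```shell
# Stand-in for the RPC socket nvmf_tgt would create once it is ready.
rpc_sock=$(mktemp -u)
( sleep 0.2; touch "$rpc_sock" ) &   # simulated slow-starting target
tgt_pid=$!

ready=no
for (( i = 0; i < 100; i++ )); do    # poll until the socket appears
    if [ -e "$rpc_sock" ]; then ready=yes; break; fi
    sleep 0.1
done
wait "$tgt_pid"
echo "target $tgt_pid ready=$ready on $rpc_sock"
rm -f "$rpc_sock"
```

Polling for the socket rather than sleeping a fixed interval is what lets the harness proceed as soon as the target is up while still failing fast if it never starts.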
00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:10.654 12:50:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.654 [2024-11-20 12:50:15.732883] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:36:10.654 [2024-11-20 12:50:15.732929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:10.654 [2024-11-20 12:50:15.809878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:10.654 [2024-11-20 12:50:15.850051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:10.654 [2024-11-20 12:50:15.850090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:10.654 [2024-11-20 12:50:15.850096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:10.654 [2024-11-20 12:50:15.850102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:10.654 [2024-11-20 12:50:15.850106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:10.654 [2024-11-20 12:50:15.851652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.654 [2024-11-20 12:50:15.851763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:10.654 [2024-11-20 12:50:15.851877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.654 [2024-11-20 12:50:15.851878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:10.913 12:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.913 INFO: Log level set to 20 00:36:10.913 INFO: Requests: 00:36:10.913 { 00:36:10.913 "jsonrpc": "2.0", 00:36:10.913 "method": "nvmf_set_config", 00:36:10.913 "id": 1, 00:36:10.913 "params": { 00:36:10.913 "admin_cmd_passthru": { 00:36:10.913 "identify_ctrlr": true 00:36:10.913 } 00:36:10.913 } 00:36:10.913 } 00:36:10.913 00:36:10.913 INFO: response: 00:36:10.913 { 00:36:10.913 "jsonrpc": "2.0", 00:36:10.913 "id": 1, 00:36:10.913 "result": true 00:36:10.913 } 00:36:10.913 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.913 12:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.913 INFO: Setting log level to 20 00:36:10.913 INFO: Setting log level to 20 00:36:10.913 INFO: Log level set to 20 00:36:10.913 INFO: Log level set to 20 00:36:10.913 
INFO: Requests: 00:36:10.913 { 00:36:10.913 "jsonrpc": "2.0", 00:36:10.913 "method": "framework_start_init", 00:36:10.913 "id": 1 00:36:10.913 } 00:36:10.913 00:36:10.913 INFO: Requests: 00:36:10.913 { 00:36:10.913 "jsonrpc": "2.0", 00:36:10.913 "method": "framework_start_init", 00:36:10.913 "id": 1 00:36:10.913 } 00:36:10.913 00:36:10.913 [2024-11-20 12:50:16.642654] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:10.913 INFO: response: 00:36:10.913 { 00:36:10.913 "jsonrpc": "2.0", 00:36:10.913 "id": 1, 00:36:10.913 "result": true 00:36:10.913 } 00:36:10.913 00:36:10.913 INFO: response: 00:36:10.913 { 00:36:10.913 "jsonrpc": "2.0", 00:36:10.913 "id": 1, 00:36:10.913 "result": true 00:36:10.913 } 00:36:10.913 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.913 12:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.913 INFO: Setting log level to 40 00:36:10.913 INFO: Setting log level to 40 00:36:10.913 INFO: Setting log level to 40 00:36:10.913 [2024-11-20 12:50:16.655905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.913 12:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.913 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:11.172 12:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:11.172 12:50:16 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.172 12:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.452 Nvme0n1 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.452 [2024-11-20 12:50:19.574162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.452 12:50:19 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.452 [ 00:36:14.452 { 00:36:14.452 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:14.452 "subtype": "Discovery", 00:36:14.452 "listen_addresses": [], 00:36:14.452 "allow_any_host": true, 00:36:14.452 "hosts": [] 00:36:14.452 }, 00:36:14.452 { 00:36:14.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:14.452 "subtype": "NVMe", 00:36:14.452 "listen_addresses": [ 00:36:14.452 { 00:36:14.452 "trtype": "TCP", 00:36:14.452 "adrfam": "IPv4", 00:36:14.452 "traddr": "10.0.0.2", 00:36:14.452 "trsvcid": "4420" 00:36:14.452 } 00:36:14.452 ], 00:36:14.452 "allow_any_host": true, 00:36:14.452 "hosts": [], 00:36:14.452 "serial_number": "SPDK00000000000001", 00:36:14.452 "model_number": "SPDK bdev Controller", 00:36:14.452 "max_namespaces": 1, 00:36:14.452 "min_cntlid": 1, 00:36:14.452 "max_cntlid": 65519, 00:36:14.452 "namespaces": [ 00:36:14.452 { 00:36:14.452 "nsid": 1, 00:36:14.452 "bdev_name": "Nvme0n1", 00:36:14.452 "name": "Nvme0n1", 00:36:14.452 "nguid": "C229805341A44E20AD68F5567EFF1417", 00:36:14.452 "uuid": "c2298053-41a4-4e20-ad68-f5567eff1417" 00:36:14.452 } 00:36:14.452 ] 00:36:14.452 } 00:36:14.452 ] 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9512048J2P0BGN 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ9512048J2P0BGN '!=' PHLJ9512048J2P0BGN ']' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.452 12:50:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:14.452 12:50:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:14.452 12:50:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:14.452 12:50:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:14.452 12:50:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:14.452 12:50:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:14.452 12:50:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:14.452 12:50:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:14.452 rmmod nvme_tcp 00:36:14.452 rmmod nvme_fabrics 00:36:14.452 rmmod nvme_keyring 00:36:14.452 12:50:20 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:14.452 12:50:20 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:14.452 12:50:20 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:14.452 12:50:20 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1202087 ']' 00:36:14.452 12:50:20 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1202087 00:36:14.452 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1202087 ']' 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1202087 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1202087 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1202087' 00:36:14.453 killing process with pid 1202087 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1202087 00:36:14.453 12:50:20 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1202087 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:16.980 12:50:22 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:16.980 12:50:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.980 12:50:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:16.980 12:50:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.885 12:50:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:18.885 00:36:18.885 real 0m25.696s 00:36:18.885 user 0m35.421s 00:36:18.885 sys 0m6.553s 00:36:18.885 12:50:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:18.885 12:50:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:18.885 ************************************ 00:36:18.885 END TEST nvmf_identify_passthru 00:36:18.885 ************************************ 00:36:19.144 12:50:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:19.144 12:50:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:19.144 12:50:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.144 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:36:19.144 ************************************ 00:36:19.144 START TEST nvmf_dif 00:36:19.144 ************************************ 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:19.144 * Looking for test storage... 
00:36:19.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:19.144 12:50:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:19.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.144 --rc genhtml_branch_coverage=1 00:36:19.144 --rc genhtml_function_coverage=1 00:36:19.144 --rc genhtml_legend=1 00:36:19.144 --rc geninfo_all_blocks=1 00:36:19.144 --rc geninfo_unexecuted_blocks=1 00:36:19.144 00:36:19.144 ' 00:36:19.144 12:50:24 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:19.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.144 --rc genhtml_branch_coverage=1 00:36:19.144 --rc genhtml_function_coverage=1 00:36:19.144 --rc genhtml_legend=1 00:36:19.144 --rc geninfo_all_blocks=1 00:36:19.144 --rc geninfo_unexecuted_blocks=1 00:36:19.144 00:36:19.144 ' 00:36:19.145 12:50:24 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:36:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.145 --rc genhtml_branch_coverage=1 00:36:19.145 --rc genhtml_function_coverage=1 00:36:19.145 --rc genhtml_legend=1 00:36:19.145 --rc geninfo_all_blocks=1 00:36:19.145 --rc geninfo_unexecuted_blocks=1 00:36:19.145 00:36:19.145 ' 00:36:19.145 12:50:24 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.145 --rc genhtml_branch_coverage=1 00:36:19.145 --rc genhtml_function_coverage=1 00:36:19.145 --rc genhtml_legend=1 00:36:19.145 --rc geninfo_all_blocks=1 00:36:19.145 --rc geninfo_unexecuted_blocks=1 00:36:19.145 00:36:19.145 ' 00:36:19.145 12:50:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:36:19.145 12:50:24 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:19.145 12:50:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:19.145 12:50:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:19.145 12:50:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:19.145 12:50:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:19.145 12:50:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.145 12:50:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.145 12:50:24 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.145 12:50:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:19.145 12:50:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:19.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:19.145 12:50:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:19.404 12:50:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:19.404 12:50:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:19.404 12:50:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:19.404 12:50:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:19.404 12:50:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.404 12:50:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:19.404 12:50:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:19.404 12:50:24 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:19.404 12:50:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:25.979 12:50:30 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:36:25.979 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:36:25.979 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:25.979 12:50:30 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:36:25.979 Found net devices under 0000:1a:00.0: cvl_0_0 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:36:25.979 Found net devices under 0000:1a:00.1: cvl_0_1 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:25.979 
12:50:30 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:25.979 12:50:30 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:25.979 12:50:31 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:25.979 12:50:31 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:25.979 12:50:31 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:25.979 12:50:31 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:25.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:25.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:36:25.979 00:36:25.979 --- 10.0.0.2 ping statistics --- 00:36:25.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.979 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:36:25.979 12:50:31 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:25.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:25.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:36:25.979 00:36:25.979 --- 10.0.0.1 ping statistics --- 00:36:25.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.980 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:36:25.980 12:50:31 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:25.980 12:50:31 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:25.980 12:50:31 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:25.980 12:50:31 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:28.595 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:28.595 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:28.595 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:d9:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:28.595 0000:80:04.6 (8086 2021): Already 
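The `nvmftestinit`/`nvmf_tcp_init` trace above moves the target-side port into a private network namespace, addresses both ends, opens the NVMe/TCP port in iptables, and pings in both directions. A minimal dry-run sketch of that sequence, using the interface names and addresses from this run; the `run` wrapper only echoes, so the sketch executes safely without root (swap it for the real commands, as root, on a test host):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built by nvmftestinit above.
# Interface names/addresses are taken from this run's log; "run" only echoes.
NS=cvl_0_0_ns_spdk       # target-side namespace
TGT_IF=cvl_0_0           # target port (moved into $NS)
INI_IF=cvl_0_1           # initiator port (stays in the default namespace)
run() { echo "+ $*"; }   # replace with: "$@"  (requires root) to apply

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

The ping replies from 10.0.0.2 and (via `ip netns exec`) from 10.0.0.1 in the log are what let `nvmf_tcp_init` return 0 and the test continue.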
using the vfio-pci driver 00:36:28.595 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:28.595 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:28.595 12:50:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:28.595 12:50:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:28.595 12:50:34 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.595 12:50:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1208333 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1208333 00:36:28.595 12:50:34 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:28.595 12:50:34 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1208333 ']' 00:36:28.595 12:50:34 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.595 12:50:34 nvmf_dif -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.595 12:50:34 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.595 12:50:34 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.595 12:50:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.595 [2024-11-20 12:50:34.331061] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:36:28.595 [2024-11-20 12:50:34.331101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.854 [2024-11-20 12:50:34.404490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.854 [2024-11-20 12:50:34.442157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.854 [2024-11-20 12:50:34.442189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.854 [2024-11-20 12:50:34.442195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.854 [2024-11-20 12:50:34.442201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.854 [2024-11-20 12:50:34.442205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:28.854 [2024-11-20 12:50:34.442770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:28.854 12:50:34 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.854 12:50:34 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:28.854 12:50:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:28.854 12:50:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.854 [2024-11-20 12:50:34.574776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.854 12:50:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:28.854 12:50:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.854 ************************************ 00:36:28.854 START TEST fio_dif_1_default 00:36:28.854 ************************************ 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- 
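Per the trace, `nvmfappstart` launches `nvmf_tgt` inside the target namespace and `create_transport` then registers the TCP transport with `--dif-insert-or-strip` appended by `dif.sh`. A dry-run sketch of those two steps (the relative `build/bin` and `scripts/rpc.py` paths assume an SPDK checkout as the working directory; the flags are copied from the log, and `run` only echoes):

```shell
#!/usr/bin/env bash
# Dry-run sketch of target launch + DIF-capable TCP transport creation.
# Paths assume an SPDK checkout as CWD; flags mirror the trace above.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # replace with "$@" on a real SPDK test host

# In the real test this runs in the background and waitforlisten polls
# the RPC socket (/var/tmp/spdk.sock) until the target is up.
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
run ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
```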
target/dif.sh@30 -- # for sub in "$@" 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.112 bdev_null0 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.112 [2024-11-20 12:50:34.655121] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:29.112 { 00:36:29.112 "params": { 00:36:29.112 "name": "Nvme$subsystem", 00:36:29.112 "trtype": "$TEST_TRANSPORT", 00:36:29.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.112 "adrfam": "ipv4", 00:36:29.112 "trsvcid": "$NVMF_PORT", 00:36:29.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:29.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.112 "hdgst": ${hdgst:-false}, 00:36:29.112 "ddgst": ${ddgst:-false} 00:36:29.112 }, 00:36:29.112 "method": "bdev_nvme_attach_controller" 00:36:29.112 } 00:36:29.112 EOF 00:36:29.112 )") 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
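The `create_subsystem 0` steps traced above build a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, and DIF type 1, then export it as `cnode0` on 10.0.0.2:4420. The same RPC sequence in dry-run form (the `scripts/rpc.py` invocation is an assumption about how `rpc_cmd` is ultimately dispatched; the method names and arguments are verbatim from the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the subsystem-creation RPCs from the trace above.
rpc() { echo "+ rpc.py $*"; }   # stand-in for scripts/rpc.py (path assumed)

rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
```

The `nvmf_tcp_listen` NOTICE in the log is the target acknowledging the final RPC of this sequence.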
00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:29.112 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:29.113 "params": { 00:36:29.113 "name": "Nvme0", 00:36:29.113 "trtype": "tcp", 00:36:29.113 "traddr": "10.0.0.2", 00:36:29.113 "adrfam": "ipv4", 00:36:29.113 "trsvcid": "4420", 00:36:29.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.113 "hdgst": false, 00:36:29.113 "ddgst": false 00:36:29.113 }, 00:36:29.113 "method": "bdev_nvme_attach_controller" 00:36:29.113 }' 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:29.113 12:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.370 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:29.370 fio-3.35 
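`gen_nvmf_target_json` assembles the per-controller JSON that fio's `spdk_bdev` engine reads on `/dev/fd/62`. The attach-controller object printed by `printf '%s\n'` above, reproduced verbatim and checked for well-formedness (any outer wrapper around this object is not visible in the trace and is deliberately not reconstructed here):

```shell
#!/usr/bin/env bash
# The controller-attach JSON fragment from the trace, validated with the
# stdlib json.tool before it would be handed to fio on /dev/fd/62.
json='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
echo "$json" | python3 -m json.tool >/dev/null && echo "well-formed JSON"
```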
00:36:29.370 Starting 1 thread 00:36:41.563 00:36:41.563 filename0: (groupid=0, jobs=1): err= 0: pid=1208761: Wed Nov 20 12:50:45 2024 00:36:41.563 read: IOPS=236, BW=946KiB/s (969kB/s)(9488KiB/10025msec) 00:36:41.563 slat (nsec): min=4969, max=34514, avg=5382.92, stdev=1042.42 00:36:41.563 clat (usec): min=335, max=45597, avg=16890.08, stdev=19825.87 00:36:41.563 lat (usec): min=340, max=45623, avg=16895.47, stdev=19825.85 00:36:41.563 clat percentiles (usec): 00:36:41.563 | 1.00th=[ 343], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:36:41.563 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[40633], 00:36:41.563 | 70.00th=[40633], 80.00th=[40633], 90.00th=[40633], 95.00th=[40633], 00:36:41.563 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:36:41.563 | 99.99th=[45351] 00:36:41.563 bw ( KiB/s): min= 832, max= 1088, per=100.00%, avg=947.10, stdev=73.69, samples=20 00:36:41.563 iops : min= 208, max= 272, avg=236.75, stdev=18.42, samples=20 00:36:41.563 lat (usec) : 500=58.98%, 750=0.04% 00:36:41.563 lat (msec) : 50=40.98% 00:36:41.563 cpu : usr=92.10%, sys=7.63%, ctx=14, majf=0, minf=245 00:36:41.563 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.563 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.563 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:41.563 00:36:41.563 Run status group 0 (all jobs): 00:36:41.563 READ: bw=946KiB/s (969kB/s), 946KiB/s-946KiB/s (969kB/s-969kB/s), io=9488KiB (9716kB), run=10025-10025msec 00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
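As a quick arithmetic cross-check of the fio summary above, the reported bandwidth and IOPS follow directly from the issued I/O count: 2372 reads of 4 KiB over the 10025 ms runtime:

```shell
#!/usr/bin/env bash
# Cross-check of the fio run summary: issued reads * block size / runtime
# should reproduce the reported ~946 KiB/s and ~236 IOPS (values from the log).
reads=2372; bs_kib=4; run_ms=10025
bw_kibs=$(( reads * bs_kib * 1000 / run_ms ))   # integer-truncated KiB/s
iops=$(( reads * 1000 / run_ms ))
echo "bw=${bw_kibs} KiB/s, iops=${iops}"        # matches BW=946KiB/s, IOPS=236
```

The ~41% of samples landing near 40.6 s in the latency percentiles is expected here: the null-bdev workload interleaves sub-millisecond completions with fio's think-time behavior on an otherwise idle queue, so the clat distribution is bimodal rather than an indication of a stall.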
00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.563 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:41.564 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 00:36:41.564 real 0m11.322s 00:36:41.564 user 0m19.364s 00:36:41.564 sys 0m1.129s 00:36:41.564 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.564 12:50:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 ************************************ 00:36:41.564 END TEST fio_dif_1_default 00:36:41.564 ************************************ 00:36:41.564 12:50:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:41.564 12:50:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:41.564 12:50:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.564 12:50:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 ************************************ 00:36:41.564 START TEST fio_dif_1_multi_subsystems 00:36:41.564 ************************************ 00:36:41.564 12:50:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 bdev_null0 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 [2024-11-20 12:50:46.048489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 bdev_null1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:41.564 { 00:36:41.564 "params": { 00:36:41.564 "name": "Nvme$subsystem", 00:36:41.564 "trtype": "$TEST_TRANSPORT", 00:36:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:41.564 "adrfam": "ipv4", 00:36:41.564 "trsvcid": "$NVMF_PORT", 00:36:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:41.564 "hdgst": ${hdgst:-false}, 00:36:41.564 "ddgst": ${ddgst:-false} 00:36:41.564 }, 00:36:41.564 "method": "bdev_nvme_attach_controller" 00:36:41.564 } 00:36:41.564 EOF 00:36:41.564 )") 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:41.564 { 00:36:41.564 "params": { 00:36:41.564 "name": "Nvme$subsystem", 00:36:41.564 "trtype": "$TEST_TRANSPORT", 00:36:41.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:41.564 "adrfam": "ipv4", 00:36:41.564 "trsvcid": "$NVMF_PORT", 00:36:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:41.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:41.564 "hdgst": ${hdgst:-false}, 00:36:41.564 "ddgst": ${ddgst:-false} 00:36:41.564 }, 00:36:41.564 "method": "bdev_nvme_attach_controller" 00:36:41.564 } 00:36:41.564 EOF 00:36:41.564 )") 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:41.564 "params": { 00:36:41.564 "name": "Nvme0", 00:36:41.564 "trtype": "tcp", 00:36:41.564 "traddr": "10.0.0.2", 00:36:41.564 "adrfam": "ipv4", 00:36:41.564 "trsvcid": "4420", 00:36:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:41.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:41.564 "hdgst": false, 00:36:41.564 "ddgst": false 00:36:41.564 }, 00:36:41.564 "method": "bdev_nvme_attach_controller" 00:36:41.564 },{ 00:36:41.564 "params": { 00:36:41.564 "name": "Nvme1", 00:36:41.564 "trtype": "tcp", 00:36:41.564 "traddr": "10.0.0.2", 00:36:41.564 "adrfam": "ipv4", 00:36:41.564 "trsvcid": "4420", 00:36:41.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:41.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:41.564 "hdgst": false, 00:36:41.564 "ddgst": false 00:36:41.564 }, 00:36:41.564 "method": "bdev_nvme_attach_controller" 00:36:41.564 }' 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:41.564 12:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.564 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:41.564 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:41.564 fio-3.35 00:36:41.564 Starting 2 threads 00:36:51.529 00:36:51.529 filename0: (groupid=0, jobs=1): err= 0: pid=1210801: Wed Nov 20 12:50:57 2024 00:36:51.529 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:36:51.529 slat (nsec): min=5494, max=36587, avg=10974.77, stdev=3032.47 00:36:51.529 clat (usec): min=40783, max=41838, avg=40974.68, stdev=65.79 00:36:51.529 lat (usec): min=40795, max=41859, avg=40985.65, stdev=65.53 00:36:51.529 clat percentiles (usec): 00:36:51.529 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:51.529 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:51.529 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:51.529 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:51.529 | 99.99th=[41681] 00:36:51.529 bw ( KiB/s): min= 384, max= 416, per=49.85%, avg=389.05, stdev=11.99, samples=19 00:36:51.529 iops : min= 96, max= 104, avg=97.26, stdev= 3.00, samples=19 00:36:51.529 lat (msec) : 50=100.00% 00:36:51.529 cpu : usr=96.96%, sys=2.80%, ctx=15, majf=0, minf=213 00:36:51.529 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.529 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.529 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.529 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:51.529 filename1: (groupid=0, jobs=1): err= 0: pid=1210802: Wed Nov 20 12:50:57 2024 00:36:51.529 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:36:51.529 slat (nsec): min=5485, max=36032, avg=10288.84, stdev=2677.46 00:36:51.529 clat (usec): min=40731, max=41843, avg=40977.06, stdev=62.27 00:36:51.529 lat (usec): min=40744, max=41863, avg=40987.35, stdev=62.10 00:36:51.529 clat percentiles (usec): 00:36:51.529 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:51.529 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:51.529 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:51.529 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:51.529 | 99.99th=[41681] 00:36:51.529 bw ( KiB/s): min= 384, max= 416, per=49.85%, avg=389.05, stdev=11.99, samples=19 00:36:51.529 iops : min= 96, max= 104, avg=97.26, stdev= 3.00, samples=19 00:36:51.529 lat (msec) : 50=100.00% 00:36:51.529 cpu : usr=96.85%, sys=2.91%, ctx=13, majf=0, minf=138 00:36:51.529 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.529 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.529 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:51.529 00:36:51.529 Run status group 0 (all jobs): 00:36:51.529 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (400kB/s-400kB/s), io=7808KiB (7995kB), run=10005-10005msec 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@43 -- # local sub 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.787 00:36:51.787 real 0m11.426s 00:36:51.787 user 0m29.017s 00:36:51.787 sys 0m0.899s 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 ************************************ 00:36:51.787 END TEST fio_dif_1_multi_subsystems 00:36:51.787 ************************************ 00:36:51.787 12:50:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:51.787 12:50:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:51.787 12:50:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 ************************************ 00:36:51.787 START TEST fio_dif_rand_params 00:36:51.787 ************************************ 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:51.787 12:50:57 
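[editor's note] For readers skimming the trace, the per-subsystem setup that dif.sh's `create_subsystem` helper performs above can be condensed into a short sketch. The RPC names (`bdev_null_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`) and their arguments are taken directly from the xtrace lines; the `RPC` dry-run wrapper is an assumption for illustration, and in a real run it would point at `scripts/rpc.py` in an SPDK checkout.

```shell
#!/usr/bin/env bash
# Condensed sketch of the create_subsystem helper seen in the trace above.
# RPC defaults to a dry run that echoes the call; set RPC to SPDK's rpc.py
# (an assumed path) to issue the RPCs for real.
set -euo pipefail
RPC="${RPC:-echo rpc.py}"

create_subsystem() {
    local sub_id=$1 dif_type=$2
    # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata carrying the DIF
    $RPC bdev_null_create "bdev_null${sub_id}" 64 512 \
        --md-size 16 --dif-type "${dif_type}"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
        --serial-number "53313233-${sub_id}" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" \
        "bdev_null${sub_id}"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
        -t tcp -a 10.0.0.2 -s 4420
}

create_subsystem 0 1   # DIF type 1, as in the fio_dif_1_multi_subsystems test
```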
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 bdev_null0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:51.787 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.045 [2024-11-20 12:50:57.550150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:52.045 { 00:36:52.045 "params": { 00:36:52.045 "name": "Nvme$subsystem", 00:36:52.045 "trtype": "$TEST_TRANSPORT", 00:36:52.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:52.045 "adrfam": "ipv4", 00:36:52.045 "trsvcid": "$NVMF_PORT", 
00:36:52.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:52.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:52.045 "hdgst": ${hdgst:-false}, 00:36:52.045 "ddgst": ${ddgst:-false} 00:36:52.045 }, 00:36:52.045 "method": "bdev_nvme_attach_controller" 00:36:52.045 } 00:36:52.045 EOF 00:36:52.045 )") 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.045 
12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:52.045 "params": { 00:36:52.045 "name": "Nvme0", 00:36:52.045 "trtype": "tcp", 00:36:52.045 "traddr": "10.0.0.2", 00:36:52.045 "adrfam": "ipv4", 00:36:52.045 "trsvcid": "4420", 00:36:52.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.045 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.045 "hdgst": false, 00:36:52.045 "ddgst": false 00:36:52.045 }, 00:36:52.045 "method": "bdev_nvme_attach_controller" 00:36:52.045 }' 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:52.045 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:52.046 12:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.303 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, 
(W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:52.303 ... 00:36:52.303 fio-3.35 00:36:52.303 Starting 3 threads 00:36:58.858 00:36:58.858 filename0: (groupid=0, jobs=1): err= 0: pid=1212984: Wed Nov 20 12:51:03 2024 00:36:58.858 read: IOPS=343, BW=42.9MiB/s (45.0MB/s)(216MiB/5043msec) 00:36:58.858 slat (nsec): min=5507, max=51890, avg=9967.56, stdev=2225.77 00:36:58.858 clat (usec): min=3255, max=48592, avg=8709.07, stdev=6205.45 00:36:58.858 lat (usec): min=3263, max=48598, avg=8719.03, stdev=6205.29 00:36:58.858 clat percentiles (usec): 00:36:58.858 | 1.00th=[ 5080], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 6980], 00:36:58.858 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8094], 00:36:58.858 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9634], 00:36:58.858 | 99.00th=[47449], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:36:58.858 | 99.99th=[48497] 00:36:58.858 bw ( KiB/s): min=33792, max=51456, per=32.96%, avg=44236.80, stdev=6952.23, samples=10 00:36:58.858 iops : min= 264, max= 402, avg=345.60, stdev=54.31, samples=10 00:36:58.858 lat (msec) : 4=0.17%, 10=96.36%, 20=0.92%, 50=2.54% 00:36:58.858 cpu : usr=95.68%, sys=4.03%, ctx=7, majf=0, minf=102 00:36:58.858 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.858 issued rwts: total=1730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.858 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:58.858 filename0: (groupid=0, jobs=1): err= 0: pid=1212985: Wed Nov 20 12:51:03 2024 00:36:58.858 read: IOPS=359, BW=44.9MiB/s (47.1MB/s)(226MiB/5043msec) 00:36:58.858 slat (nsec): min=5547, max=28490, avg=10364.05, stdev=2012.50 00:36:58.858 clat (usec): min=3082, max=49973, avg=8318.27, stdev=4878.29 00:36:58.858 lat (usec): min=3088, max=49984, 
avg=8328.63, stdev=4878.30 00:36:58.858 clat percentiles (usec): 00:36:58.858 | 1.00th=[ 3425], 5.00th=[ 4293], 10.00th=[ 5538], 20.00th=[ 6915], 00:36:58.858 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:36:58.858 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[ 9896], 00:36:58.858 | 99.00th=[45351], 99.50th=[47973], 99.90th=[49546], 99.95th=[50070], 00:36:58.858 | 99.99th=[50070] 00:36:58.858 bw ( KiB/s): min=41984, max=51712, per=34.51%, avg=46310.40, stdev=2774.19, samples=10 00:36:58.858 iops : min= 328, max= 404, avg=361.80, stdev=21.67, samples=10 00:36:58.858 lat (msec) : 4=3.92%, 10=91.88%, 20=2.76%, 50=1.44% 00:36:58.858 cpu : usr=94.84%, sys=4.86%, ctx=15, majf=0, minf=91 00:36:58.858 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.858 issued rwts: total=1811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.858 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:58.858 filename0: (groupid=0, jobs=1): err= 0: pid=1212986: Wed Nov 20 12:51:03 2024 00:36:58.858 read: IOPS=346, BW=43.3MiB/s (45.4MB/s)(218MiB/5042msec) 00:36:58.858 slat (nsec): min=5532, max=28164, avg=9739.86, stdev=1954.53 00:36:58.858 clat (usec): min=3033, max=48553, avg=8627.91, stdev=4865.20 00:36:58.858 lat (usec): min=3039, max=48563, avg=8637.65, stdev=4865.22 00:36:58.858 clat percentiles (usec): 00:36:58.858 | 1.00th=[ 3425], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 7111], 00:36:58.858 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8586], 00:36:58.858 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10290], 00:36:58.858 | 99.00th=[45351], 99.50th=[46924], 99.90th=[47973], 99.95th=[48497], 00:36:58.858 | 99.99th=[48497] 00:36:58.858 bw ( KiB/s): min=33792, max=50176, per=33.27%, avg=44646.40, 
stdev=4398.10, samples=10 00:36:58.858 iops : min= 264, max= 392, avg=348.80, stdev=34.36, samples=10 00:36:58.858 lat (msec) : 4=2.52%, 10=89.63%, 20=6.36%, 50=1.49% 00:36:58.858 cpu : usr=95.54%, sys=4.17%, ctx=16, majf=0, minf=100 00:36:58.858 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.858 issued rwts: total=1746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.858 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:58.858 00:36:58.858 Run status group 0 (all jobs): 00:36:58.858 READ: bw=131MiB/s (137MB/s), 42.9MiB/s-44.9MiB/s (45.0MB/s-47.1MB/s), io=661MiB (693MB), run=5042-5043msec 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:58.858 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 bdev_null0 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 [2024-11-20 12:51:03.750178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 bdev_null1 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 bdev_null2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.859 12:51:03 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:58.859 { 00:36:58.859 "params": { 00:36:58.859 "name": "Nvme$subsystem", 00:36:58.859 "trtype": "$TEST_TRANSPORT", 00:36:58.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.859 "adrfam": "ipv4", 00:36:58.859 "trsvcid": "$NVMF_PORT", 00:36:58.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.859 "hdgst": ${hdgst:-false}, 00:36:58.859 "ddgst": ${ddgst:-false} 00:36:58.859 }, 00:36:58.859 "method": "bdev_nvme_attach_controller" 00:36:58.859 } 00:36:58.859 EOF 00:36:58.859 )") 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:58.859 12:51:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:58.859 { 00:36:58.859 "params": { 00:36:58.859 "name": "Nvme$subsystem", 00:36:58.859 "trtype": "$TEST_TRANSPORT", 00:36:58.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.859 "adrfam": "ipv4", 00:36:58.859 "trsvcid": "$NVMF_PORT", 00:36:58.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.859 "hdgst": ${hdgst:-false}, 00:36:58.859 "ddgst": ${ddgst:-false} 00:36:58.859 }, 00:36:58.859 "method": "bdev_nvme_attach_controller" 00:36:58.859 } 00:36:58.859 EOF 00:36:58.859 )") 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:58.859 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.860 12:51:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:58.860 { 00:36:58.860 "params": { 00:36:58.860 "name": "Nvme$subsystem", 00:36:58.860 "trtype": "$TEST_TRANSPORT", 00:36:58.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.860 "adrfam": "ipv4", 00:36:58.860 "trsvcid": "$NVMF_PORT", 00:36:58.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.860 "hdgst": ${hdgst:-false}, 00:36:58.860 "ddgst": ${ddgst:-false} 00:36:58.860 }, 00:36:58.860 "method": "bdev_nvme_attach_controller" 00:36:58.860 } 00:36:58.860 EOF 00:36:58.860 )") 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:58.860 "params": { 00:36:58.860 "name": "Nvme0", 00:36:58.860 "trtype": "tcp", 00:36:58.860 "traddr": "10.0.0.2", 00:36:58.860 "adrfam": "ipv4", 00:36:58.860 "trsvcid": "4420", 00:36:58.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.860 "hdgst": false, 00:36:58.860 "ddgst": false 00:36:58.860 }, 00:36:58.860 "method": "bdev_nvme_attach_controller" 00:36:58.860 },{ 00:36:58.860 "params": { 00:36:58.860 "name": "Nvme1", 00:36:58.860 "trtype": "tcp", 00:36:58.860 "traddr": "10.0.0.2", 00:36:58.860 "adrfam": "ipv4", 00:36:58.860 "trsvcid": "4420", 00:36:58.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:58.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:58.860 "hdgst": false, 00:36:58.860 "ddgst": false 00:36:58.860 }, 00:36:58.860 "method": "bdev_nvme_attach_controller" 00:36:58.860 },{ 00:36:58.860 "params": { 00:36:58.860 "name": "Nvme2", 00:36:58.860 "trtype": "tcp", 00:36:58.860 "traddr": "10.0.0.2", 00:36:58.860 "adrfam": "ipv4", 00:36:58.860 "trsvcid": "4420", 00:36:58.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:58.860 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:58.860 "hdgst": false, 00:36:58.860 "ddgst": false 00:36:58.860 }, 00:36:58.860 "method": "bdev_nvme_attach_controller" 00:36:58.860 }' 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.860 12:51:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:58.860 12:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.860 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:58.860 ... 00:36:58.860 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:58.860 ... 00:36:58.860 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:58.860 ... 
00:36:58.860 fio-3.35 00:36:58.860 Starting 24 threads 00:37:11.052 00:37:11.052 filename0: (groupid=0, jobs=1): err= 0: pid=1214301: Wed Nov 20 12:51:15 2024 00:37:11.052 read: IOPS=635, BW=2544KiB/s (2605kB/s)(24.9MiB/10013msec) 00:37:11.052 slat (nsec): min=6875, max=83464, avg=33967.89, stdev=19645.18 00:37:11.052 clat (usec): min=2206, max=29487, avg=24893.88, stdev=2655.92 00:37:11.052 lat (usec): min=2221, max=29519, avg=24927.85, stdev=2656.86 00:37:11.052 clat percentiles (usec): 00:37:11.052 | 1.00th=[ 8717], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:37:11.052 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[27395], 00:37:11.053 | 99.00th=[27919], 99.50th=[27919], 99.90th=[29492], 99.95th=[29492], 00:37:11.053 | 99.99th=[29492] 00:37:11.053 bw ( KiB/s): min= 2427, max= 3200, per=4.20%, avg=2539.95, stdev=172.71, samples=20 00:37:11.053 iops : min= 606, max= 800, avg=634.90, stdev=43.20, samples=20 00:37:11.053 lat (msec) : 4=0.25%, 10=1.11%, 20=0.64%, 50=97.99% 00:37:11.053 cpu : usr=98.25%, sys=1.16%, ctx=76, majf=0, minf=53 00:37:11.053 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 issued rwts: total=6368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.053 filename0: (groupid=0, jobs=1): err= 0: pid=1214302: Wed Nov 20 12:51:15 2024 00:37:11.053 read: IOPS=627, BW=2512KiB/s (2572kB/s)(24.6MiB/10013msec) 00:37:11.053 slat (nsec): min=5916, max=85982, avg=39705.35, stdev=16567.88 00:37:11.053 clat (usec): min=17831, max=32522, avg=25136.40, stdev=1097.68 00:37:11.053 lat (usec): min=17840, max=32566, avg=25176.11, stdev=1101.43 00:37:11.053 clat percentiles (usec): 
00:37:11.053 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:37:11.053 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.053 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28967], 99.95th=[29230], 00:37:11.053 | 99.99th=[32637] 00:37:11.053 bw ( KiB/s): min= 2427, max= 2688, per=4.15%, avg=2508.25, stdev=77.10, samples=20 00:37:11.053 iops : min= 606, max= 672, avg=626.95, stdev=19.33, samples=20 00:37:11.053 lat (msec) : 20=0.40%, 50=99.60% 00:37:11.053 cpu : usr=98.68%, sys=0.95%, ctx=13, majf=0, minf=25 00:37:11.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.053 filename0: (groupid=0, jobs=1): err= 0: pid=1214303: Wed Nov 20 12:51:15 2024 00:37:11.053 read: IOPS=628, BW=2513KiB/s (2574kB/s)(24.6MiB/10008msec) 00:37:11.053 slat (nsec): min=4599, max=86022, avg=37420.50, stdev=16722.05 00:37:11.053 clat (usec): min=7757, max=35577, avg=25123.28, stdev=1501.75 00:37:11.053 lat (usec): min=7765, max=35591, avg=25160.70, stdev=1503.54 00:37:11.053 clat percentiles (usec): 00:37:11.053 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:37:11.053 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.053 | 99.00th=[27919], 99.50th=[28705], 99.90th=[35390], 99.95th=[35390], 00:37:11.053 | 99.99th=[35390] 00:37:11.053 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2506.00, stdev=77.27, samples=19 00:37:11.053 iops : min= 576, max= 640, avg=626.47, stdev=19.30, 
samples=19 00:37:11.053 lat (msec) : 10=0.25%, 20=0.25%, 50=99.49% 00:37:11.053 cpu : usr=98.86%, sys=0.76%, ctx=14, majf=0, minf=27 00:37:11.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.053 filename0: (groupid=0, jobs=1): err= 0: pid=1214304: Wed Nov 20 12:51:15 2024 00:37:11.053 read: IOPS=628, BW=2514KiB/s (2574kB/s)(24.6MiB/10006msec) 00:37:11.053 slat (nsec): min=5800, max=85999, avg=34726.72, stdev=17861.05 00:37:11.053 clat (usec): min=12744, max=29532, avg=25173.64, stdev=1230.32 00:37:11.053 lat (usec): min=12753, max=29566, avg=25208.37, stdev=1230.54 00:37:11.053 clat percentiles (usec): 00:37:11.053 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:37:11.053 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26870], 95.00th=[27395], 00:37:11.053 | 99.00th=[27919], 99.50th=[28181], 99.90th=[29230], 99.95th=[29492], 00:37:11.053 | 99.99th=[29492] 00:37:11.053 bw ( KiB/s): min= 2304, max= 2688, per=4.17%, avg=2518.68, stdev=95.90, samples=19 00:37:11.053 iops : min= 576, max= 672, avg=629.58, stdev=24.00, samples=19 00:37:11.053 lat (msec) : 20=0.51%, 50=99.49% 00:37:11.053 cpu : usr=98.76%, sys=0.85%, ctx=14, majf=0, minf=22 00:37:11.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.053 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:37:11.053 filename0: (groupid=0, jobs=1): err= 0: pid=1214305: Wed Nov 20 12:51:15 2024 00:37:11.053 read: IOPS=628, BW=2514KiB/s (2574kB/s)(24.6MiB/10005msec) 00:37:11.053 slat (nsec): min=5985, max=95958, avg=44206.87, stdev=14768.48 00:37:11.053 clat (usec): min=7941, max=37565, avg=25061.31, stdev=1595.47 00:37:11.053 lat (usec): min=7949, max=37584, avg=25105.51, stdev=1597.67 00:37:11.053 clat percentiles (usec): 00:37:11.053 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:37:11.053 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.053 | 99.00th=[27919], 99.50th=[28443], 99.90th=[37487], 99.95th=[37487], 00:37:11.053 | 99.99th=[37487] 00:37:11.053 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2505.53, stdev=88.68, samples=19 00:37:11.053 iops : min= 576, max= 672, avg=626.32, stdev=22.19, samples=19 00:37:11.053 lat (msec) : 10=0.25%, 20=0.25%, 50=99.49% 00:37:11.053 cpu : usr=98.61%, sys=0.98%, ctx=12, majf=0, minf=21 00:37:11.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.053 filename0: (groupid=0, jobs=1): err= 0: pid=1214306: Wed Nov 20 12:51:15 2024 00:37:11.053 read: IOPS=628, BW=2514KiB/s (2574kB/s)(24.6MiB/10006msec) 00:37:11.053 slat (nsec): min=6706, max=86185, avg=45314.88, stdev=14906.48 00:37:11.053 clat (usec): min=13417, max=28985, avg=25056.92, stdev=1187.31 00:37:11.053 lat (usec): min=13433, max=29012, avg=25102.23, stdev=1189.86 00:37:11.053 clat percentiles (usec): 00:37:11.053 | 1.00th=[22938], 5.00th=[23462], 
10.00th=[23725], 20.00th=[24249], 00:37:11.053 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.053 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28705], 99.95th=[28705], 00:37:11.053 | 99.99th=[28967] 00:37:11.053 bw ( KiB/s): min= 2304, max= 2688, per=4.16%, avg=2512.21, stdev=97.47, samples=19 00:37:11.053 iops : min= 576, max= 672, avg=627.95, stdev=24.37, samples=19 00:37:11.053 lat (msec) : 20=0.25%, 50=99.75% 00:37:11.053 cpu : usr=98.78%, sys=0.83%, ctx=18, majf=0, minf=25 00:37:11.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.053 filename0: (groupid=0, jobs=1): err= 0: pid=1214307: Wed Nov 20 12:51:15 2024 00:37:11.053 read: IOPS=631, BW=2526KiB/s (2587kB/s)(24.7MiB/10007msec) 00:37:11.053 slat (nsec): min=5609, max=87436, avg=29128.64, stdev=17750.72 00:37:11.053 clat (usec): min=6497, max=29199, avg=25097.14, stdev=1885.34 00:37:11.053 lat (usec): min=6507, max=29224, avg=25126.27, stdev=1887.37 00:37:11.053 clat percentiles (usec): 00:37:11.053 | 1.00th=[13435], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:37:11.053 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[27132], 00:37:11.053 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28967], 99.95th=[29230], 00:37:11.053 | 99.99th=[29230] 00:37:11.053 bw ( KiB/s): min= 2304, max= 2944, per=4.19%, avg=2531.89, stdev=132.36, samples=19 00:37:11.053 iops : min= 576, max= 736, avg=632.84, stdev=33.15, samples=19 00:37:11.053 lat (msec) : 10=0.51%, 
20=0.76%, 50=98.73% 00:37:11.053 cpu : usr=98.64%, sys=0.88%, ctx=43, majf=0, minf=45 00:37:11.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.053 issued rwts: total=6320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.053 filename0: (groupid=0, jobs=1): err= 0: pid=1214308: Wed Nov 20 12:51:15 2024 00:37:11.053 read: IOPS=628, BW=2514KiB/s (2574kB/s)(24.6MiB/10006msec) 00:37:11.053 slat (nsec): min=6581, max=83469, avg=37351.12, stdev=18900.66 00:37:11.053 clat (usec): min=13340, max=29258, avg=25172.67, stdev=1193.60 00:37:11.053 lat (usec): min=13361, max=29283, avg=25210.02, stdev=1193.50 00:37:11.053 clat percentiles (usec): 00:37:11.053 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:37:11.053 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:37:11.053 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[27132], 00:37:11.053 | 99.00th=[27919], 99.50th=[27919], 99.90th=[29230], 99.95th=[29230], 00:37:11.053 | 99.99th=[29230] 00:37:11.053 bw ( KiB/s): min= 2304, max= 2688, per=4.16%, avg=2512.21, stdev=97.47, samples=19 00:37:11.053 iops : min= 576, max= 672, avg=627.95, stdev=24.37, samples=19 00:37:11.053 lat (msec) : 20=0.25%, 50=99.75% 00:37:11.053 cpu : usr=98.01%, sys=1.24%, ctx=92, majf=0, minf=46 00:37:11.053 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: 
(groupid=0, jobs=1): err= 0: pid=1214309: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=627, BW=2509KiB/s (2569kB/s)(24.5MiB/10001msec) 00:37:11.054 slat (nsec): min=6085, max=96896, avg=29040.26, stdev=16724.81 00:37:11.054 clat (usec): min=15134, max=50765, avg=25285.31, stdev=1413.30 00:37:11.054 lat (usec): min=15177, max=50784, avg=25314.35, stdev=1412.91 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:37:11.054 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:11.054 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:37:11.054 | 99.00th=[27919], 99.50th=[29492], 99.90th=[37487], 99.95th=[38011], 00:37:11.054 | 99.99th=[50594] 00:37:11.054 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2505.53, stdev=98.41, samples=19 00:37:11.054 iops : min= 576, max= 672, avg=626.32, stdev=24.62, samples=19 00:37:11.054 lat (msec) : 20=0.41%, 50=99.55%, 100=0.03% 00:37:11.054 cpu : usr=98.50%, sys=0.92%, ctx=94, majf=0, minf=31 00:37:11.054 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: (groupid=0, jobs=1): err= 0: pid=1214310: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=635, BW=2542KiB/s (2603kB/s)(24.8MiB/10005msec) 00:37:11.054 slat (nsec): min=5539, max=83069, avg=25068.22, stdev=18525.59 00:37:11.054 clat (usec): min=4091, max=55803, avg=25008.14, stdev=3033.62 00:37:11.054 lat (usec): min=4096, max=55826, avg=25033.21, stdev=3036.63 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[13173], 5.00th=[20317], 10.00th=[23725], 20.00th=[24249], 00:37:11.054 | 
30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:37:11.054 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:37:11.054 | 99.00th=[35914], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:37:11.054 | 99.99th=[55837] 00:37:11.054 bw ( KiB/s): min= 2352, max= 2736, per=4.20%, avg=2537.47, stdev=92.19, samples=19 00:37:11.054 iops : min= 588, max= 684, avg=634.32, stdev=23.02, samples=19 00:37:11.054 lat (msec) : 10=0.42%, 20=4.07%, 50=95.47%, 100=0.03% 00:37:11.054 cpu : usr=98.84%, sys=0.74%, ctx=17, majf=0, minf=26 00:37:11.054 IO depths : 1=0.7%, 2=3.3%, 4=11.0%, 8=70.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=91.4%, 8=5.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: (groupid=0, jobs=1): err= 0: pid=1214311: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=630, BW=2520KiB/s (2581kB/s)(24.6MiB/10005msec) 00:37:11.054 slat (nsec): min=6222, max=87664, avg=39930.19, stdev=16226.67 00:37:11.054 clat (usec): min=6410, max=29199, avg=25050.10, stdev=1538.94 00:37:11.054 lat (usec): min=6420, max=29238, avg=25090.03, stdev=1542.83 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[18482], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:37:11.054 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.054 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.054 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:37:11.054 | 99.99th=[29230] 00:37:11.054 bw ( KiB/s): min= 2304, max= 2816, per=4.18%, avg=2525.63, stdev=111.48, samples=19 00:37:11.054 iops : min= 576, max= 704, avg=631.32, stdev=27.89, samples=19 00:37:11.054 lat (msec) : 10=0.22%, 20=0.82%, 50=98.95% 
00:37:11.054 cpu : usr=98.74%, sys=0.85%, ctx=18, majf=0, minf=32 00:37:11.054 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: (groupid=0, jobs=1): err= 0: pid=1214312: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=627, BW=2508KiB/s (2568kB/s)(24.5MiB/10002msec) 00:37:11.054 slat (nsec): min=5624, max=85806, avg=38279.07, stdev=15984.17 00:37:11.054 clat (usec): min=14833, max=46562, avg=25191.13, stdev=1361.41 00:37:11.054 lat (usec): min=14844, max=46577, avg=25229.41, stdev=1362.50 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:37:11.054 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.054 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.054 | 99.00th=[27919], 99.50th=[28705], 99.90th=[38011], 99.95th=[38011], 00:37:11.054 | 99.99th=[46400] 00:37:11.054 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2505.53, stdev=98.41, samples=19 00:37:11.054 iops : min= 576, max= 672, avg=626.32, stdev=24.62, samples=19 00:37:11.054 lat (msec) : 20=0.30%, 50=99.70% 00:37:11.054 cpu : usr=98.79%, sys=0.82%, ctx=16, majf=0, minf=41 00:37:11.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: (groupid=0, jobs=1): err= 0: 
pid=1214313: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=631, BW=2524KiB/s (2585kB/s)(24.7MiB/10015msec) 00:37:11.054 slat (nsec): min=6007, max=87611, avg=35435.68, stdev=18177.20 00:37:11.054 clat (usec): min=8665, max=29115, avg=25058.17, stdev=1695.25 00:37:11.054 lat (usec): min=8681, max=29153, avg=25093.60, stdev=1698.66 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[15270], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:37:11.054 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.054 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.054 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:37:11.054 | 99.99th=[29230] 00:37:11.054 bw ( KiB/s): min= 2427, max= 2816, per=4.17%, avg=2520.75, stdev=102.58, samples=20 00:37:11.054 iops : min= 606, max= 704, avg=630.10, stdev=25.67, samples=20 00:37:11.054 lat (msec) : 10=0.25%, 20=1.01%, 50=98.73% 00:37:11.054 cpu : usr=98.27%, sys=1.16%, ctx=78, majf=0, minf=30 00:37:11.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: (groupid=0, jobs=1): err= 0: pid=1214314: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=630, BW=2523KiB/s (2584kB/s)(24.7MiB/10018msec) 00:37:11.054 slat (nsec): min=5798, max=87161, avg=42363.61, stdev=14666.73 00:37:11.054 clat (usec): min=8556, max=32538, avg=25015.62, stdev=1674.12 00:37:11.054 lat (usec): min=8573, max=32583, avg=25057.98, stdev=1676.15 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[14222], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:37:11.054 | 30.00th=[24773], 40.00th=[24773], 
50.00th=[25035], 60.00th=[25297], 00:37:11.054 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.054 | 99.00th=[27919], 99.50th=[27919], 99.90th=[29230], 99.95th=[29230], 00:37:11.054 | 99.99th=[32637] 00:37:11.054 bw ( KiB/s): min= 2304, max= 2816, per=4.17%, avg=2521.05, stdev=111.09, samples=20 00:37:11.054 iops : min= 576, max= 704, avg=630.15, stdev=27.83, samples=20 00:37:11.054 lat (msec) : 10=0.25%, 20=0.79%, 50=98.96% 00:37:11.054 cpu : usr=98.60%, sys=0.95%, ctx=54, majf=0, minf=35 00:37:11.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: (groupid=0, jobs=1): err= 0: pid=1214315: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=628, BW=2514KiB/s (2574kB/s)(24.6MiB/10005msec) 00:37:11.054 slat (nsec): min=7261, max=90009, avg=40370.60, stdev=16275.91 00:37:11.054 clat (usec): min=5574, max=43752, avg=25100.41, stdev=1859.44 00:37:11.054 lat (usec): min=5587, max=43773, avg=25140.78, stdev=1862.08 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:37:11.054 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.054 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.054 | 99.00th=[27919], 99.50th=[28443], 99.90th=[43779], 99.95th=[43779], 00:37:11.054 | 99.99th=[43779] 00:37:11.054 bw ( KiB/s): min= 2427, max= 2688, per=4.15%, avg=2505.53, stdev=77.75, samples=19 00:37:11.054 iops : min= 606, max= 672, avg=626.32, stdev=19.46, samples=19 00:37:11.054 lat (msec) : 10=0.40%, 20=0.21%, 50=99.40% 00:37:11.054 cpu : usr=98.72%, sys=0.89%, ctx=14, 
majf=0, minf=35 00:37:11.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.054 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.054 filename1: (groupid=0, jobs=1): err= 0: pid=1214316: Wed Nov 20 12:51:15 2024 00:37:11.054 read: IOPS=628, BW=2514KiB/s (2574kB/s)(24.6MiB/10005msec) 00:37:11.054 slat (nsec): min=6181, max=86028, avg=39258.40, stdev=17139.01 00:37:11.054 clat (usec): min=5590, max=43821, avg=25100.31, stdev=1861.01 00:37:11.054 lat (usec): min=5596, max=43844, avg=25139.57, stdev=1863.93 00:37:11.054 clat percentiles (usec): 00:37:11.054 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:37:11.054 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.055 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.055 | 99.00th=[27919], 99.50th=[28443], 99.90th=[43779], 99.95th=[43779], 00:37:11.055 | 99.99th=[43779] 00:37:11.055 bw ( KiB/s): min= 2427, max= 2688, per=4.15%, avg=2505.53, stdev=77.75, samples=19 00:37:11.055 iops : min= 606, max= 672, avg=626.32, stdev=19.46, samples=19 00:37:11.055 lat (msec) : 10=0.40%, 20=0.17%, 50=99.43% 00:37:11.055 cpu : usr=98.77%, sys=0.84%, ctx=15, majf=0, minf=31 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.055 filename2: (groupid=0, jobs=1): err= 0: pid=1214317: Wed Nov 20 12:51:15 2024 
00:37:11.055 read: IOPS=627, BW=2508KiB/s (2568kB/s)(24.5MiB/10002msec) 00:37:11.055 slat (nsec): min=4596, max=86008, avg=37327.53, stdev=16834.54 00:37:11.055 clat (usec): min=14925, max=38069, avg=25172.88, stdev=1302.59 00:37:11.055 lat (usec): min=14938, max=38082, avg=25210.21, stdev=1304.40 00:37:11.055 clat percentiles (usec): 00:37:11.055 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:37:11.055 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.055 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.055 | 99.00th=[27919], 99.50th=[28705], 99.90th=[38011], 99.95th=[38011], 00:37:11.055 | 99.99th=[38011] 00:37:11.055 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2505.53, stdev=98.41, samples=19 00:37:11.055 iops : min= 576, max= 672, avg=626.32, stdev=24.62, samples=19 00:37:11.055 lat (msec) : 20=0.29%, 50=99.71% 00:37:11.055 cpu : usr=98.66%, sys=0.96%, ctx=13, majf=0, minf=26 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.055 filename2: (groupid=0, jobs=1): err= 0: pid=1214318: Wed Nov 20 12:51:15 2024 00:37:11.055 read: IOPS=630, BW=2523KiB/s (2584kB/s)(24.7MiB/10018msec) 00:37:11.055 slat (nsec): min=6496, max=95682, avg=42707.79, stdev=15756.23 00:37:11.055 clat (usec): min=8553, max=29021, avg=25008.20, stdev=1662.80 00:37:11.055 lat (usec): min=8564, max=29069, avg=25050.91, stdev=1664.92 00:37:11.055 clat percentiles (usec): 00:37:11.055 | 1.00th=[14353], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:37:11.055 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.055 | 
70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.055 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28705], 99.95th=[28967], 00:37:11.055 | 99.99th=[28967] 00:37:11.055 bw ( KiB/s): min= 2304, max= 2816, per=4.17%, avg=2521.05, stdev=111.09, samples=20 00:37:11.055 iops : min= 576, max= 704, avg=630.15, stdev=27.83, samples=20 00:37:11.055 lat (msec) : 10=0.25%, 20=0.76%, 50=98.99% 00:37:11.055 cpu : usr=98.19%, sys=1.11%, ctx=149, majf=0, minf=32 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.055 filename2: (groupid=0, jobs=1): err= 0: pid=1214319: Wed Nov 20 12:51:15 2024 00:37:11.055 read: IOPS=628, BW=2514KiB/s (2574kB/s)(24.6MiB/10006msec) 00:37:11.055 slat (nsec): min=7495, max=95904, avg=43379.93, stdev=13480.98 00:37:11.055 clat (usec): min=13394, max=29104, avg=25094.90, stdev=1184.97 00:37:11.055 lat (usec): min=13409, max=29167, avg=25138.28, stdev=1186.94 00:37:11.055 clat percentiles (usec): 00:37:11.055 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:37:11.055 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.055 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.055 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:37:11.055 | 99.99th=[29230] 00:37:11.055 bw ( KiB/s): min= 2304, max= 2688, per=4.16%, avg=2512.21, stdev=97.47, samples=19 00:37:11.055 iops : min= 576, max= 672, avg=627.95, stdev=24.37, samples=19 00:37:11.055 lat (msec) : 20=0.25%, 50=99.75% 00:37:11.055 cpu : usr=98.70%, sys=0.92%, ctx=15, majf=0, minf=31 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.055 filename2: (groupid=0, jobs=1): err= 0: pid=1214320: Wed Nov 20 12:51:15 2024 00:37:11.055 read: IOPS=628, BW=2513KiB/s (2574kB/s)(24.6MiB/10008msec) 00:37:11.055 slat (nsec): min=5900, max=83163, avg=36589.99, stdev=15308.79 00:37:11.055 clat (usec): min=7829, max=35433, avg=25139.83, stdev=1496.42 00:37:11.055 lat (usec): min=7859, max=35451, avg=25176.42, stdev=1497.97 00:37:11.055 clat percentiles (usec): 00:37:11.055 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:37:11.055 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.055 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.055 | 99.00th=[27919], 99.50th=[28705], 99.90th=[35390], 99.95th=[35390], 00:37:11.055 | 99.99th=[35390] 00:37:11.055 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2506.00, stdev=77.27, samples=19 00:37:11.055 iops : min= 576, max= 640, avg=626.47, stdev=19.30, samples=19 00:37:11.055 lat (msec) : 10=0.25%, 20=0.25%, 50=99.49% 00:37:11.055 cpu : usr=98.42%, sys=1.03%, ctx=75, majf=0, minf=24 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.055 filename2: (groupid=0, jobs=1): err= 0: pid=1214321: Wed Nov 20 12:51:15 2024 00:37:11.055 read: IOPS=635, BW=2543KiB/s 
(2604kB/s)(24.9MiB/10017msec) 00:37:11.055 slat (nsec): min=5526, max=79860, avg=15396.38, stdev=13700.54 00:37:11.055 clat (usec): min=2035, max=29055, avg=25040.74, stdev=2673.17 00:37:11.055 lat (usec): min=2050, max=29082, avg=25056.14, stdev=2673.24 00:37:11.055 clat percentiles (usec): 00:37:11.055 | 1.00th=[ 5473], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:37:11.055 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:37:11.055 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:37:11.055 | 99.00th=[27919], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967], 00:37:11.055 | 99.99th=[28967] 00:37:11.055 bw ( KiB/s): min= 2432, max= 3200, per=4.21%, avg=2540.20, stdev=172.55, samples=20 00:37:11.055 iops : min= 608, max= 800, avg=635.00, stdev=43.13, samples=20 00:37:11.055 lat (msec) : 4=0.25%, 10=1.01%, 20=0.75%, 50=97.99% 00:37:11.055 cpu : usr=98.67%, sys=0.93%, ctx=55, majf=0, minf=55 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.055 filename2: (groupid=0, jobs=1): err= 0: pid=1214322: Wed Nov 20 12:51:15 2024 00:37:11.055 read: IOPS=627, BW=2508KiB/s (2568kB/s)(24.5MiB/10002msec) 00:37:11.055 slat (nsec): min=6075, max=80686, avg=34619.00, stdev=16153.91 00:37:11.055 clat (usec): min=14975, max=46726, avg=25235.30, stdev=1404.92 00:37:11.055 lat (usec): min=14988, max=46743, avg=25269.92, stdev=1405.65 00:37:11.055 clat percentiles (usec): 00:37:11.055 | 1.00th=[23462], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:37:11.055 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:37:11.055 | 70.00th=[25560], 
80.00th=[26084], 90.00th=[26870], 95.00th=[27132], 00:37:11.055 | 99.00th=[27919], 99.50th=[28705], 99.90th=[38011], 99.95th=[46400], 00:37:11.055 | 99.99th=[46924] 00:37:11.055 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2505.26, stdev=98.63, samples=19 00:37:11.055 iops : min= 576, max= 672, avg=626.21, stdev=24.71, samples=19 00:37:11.055 lat (msec) : 20=0.33%, 50=99.67% 00:37:11.055 cpu : usr=98.56%, sys=1.03%, ctx=17, majf=0, minf=27 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.055 filename2: (groupid=0, jobs=1): err= 0: pid=1214323: Wed Nov 20 12:51:15 2024 00:37:11.055 read: IOPS=628, BW=2513KiB/s (2574kB/s)(24.6MiB/10007msec) 00:37:11.055 slat (nsec): min=6153, max=95847, avg=43675.90, stdev=13834.43 00:37:11.055 clat (usec): min=13394, max=29153, avg=25082.51, stdev=1183.45 00:37:11.055 lat (usec): min=13403, max=29188, avg=25126.19, stdev=1185.65 00:37:11.055 clat percentiles (usec): 00:37:11.055 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:37:11.055 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:37:11.055 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27132], 00:37:11.055 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:37:11.055 | 99.99th=[29230] 00:37:11.055 bw ( KiB/s): min= 2304, max= 2688, per=4.16%, avg=2511.95, stdev=97.32, samples=19 00:37:11.055 iops : min= 576, max= 672, avg=627.89, stdev=24.34, samples=19 00:37:11.055 lat (msec) : 20=0.25%, 50=99.75% 00:37:11.055 cpu : usr=98.88%, sys=0.75%, ctx=16, majf=0, minf=43 00:37:11.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 
32=0.0%, >=64=0.0% 00:37:11.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.055 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.056 filename2: (groupid=0, jobs=1): err= 0: pid=1214324: Wed Nov 20 12:51:15 2024 00:37:11.056 read: IOPS=630, BW=2520KiB/s (2581kB/s)(24.6MiB/10005msec) 00:37:11.056 slat (nsec): min=5991, max=89102, avg=37344.80, stdev=18086.69 00:37:11.056 clat (usec): min=5554, max=43827, avg=25111.98, stdev=2454.39 00:37:11.056 lat (usec): min=5567, max=43846, avg=25149.33, stdev=2457.25 00:37:11.056 clat percentiles (usec): 00:37:11.056 | 1.00th=[16450], 5.00th=[23462], 10.00th=[23725], 20.00th=[24511], 00:37:11.056 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:37:11.056 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:37:11.056 | 99.00th=[32375], 99.50th=[33817], 99.90th=[43779], 99.95th=[43779], 00:37:11.056 | 99.99th=[43779] 00:37:11.056 bw ( KiB/s): min= 2416, max= 2672, per=4.16%, avg=2512.26, stdev=70.17, samples=19 00:37:11.056 iops : min= 604, max= 668, avg=628.00, stdev=17.58, samples=19 00:37:11.056 lat (msec) : 10=0.40%, 20=2.36%, 50=97.24% 00:37:11.056 cpu : usr=98.68%, sys=0.93%, ctx=14, majf=0, minf=29 00:37:11.056 IO depths : 1=1.4%, 2=7.3%, 4=24.1%, 8=56.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:37:11.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.056 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.056 issued rwts: total=6304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.056 00:37:11.056 Run status group 0 (all jobs): 00:37:11.056 READ: bw=59.0MiB/s (61.8MB/s), 2508KiB/s-2544KiB/s (2568kB/s-2605kB/s), io=591MiB (619MB), 
run=10001-10018msec 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 bdev_null0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 [2024-11-20 12:51:15.657539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 bdev_null1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:11.056 12:51:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:37:11.056 { 00:37:11.056 "params": { 00:37:11.056 "name": "Nvme$subsystem", 00:37:11.056 "trtype": "$TEST_TRANSPORT", 00:37:11.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.056 "adrfam": "ipv4", 00:37:11.056 "trsvcid": "$NVMF_PORT", 00:37:11.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.056 "hdgst": ${hdgst:-false}, 00:37:11.056 "ddgst": ${ddgst:-false} 00:37:11.056 }, 00:37:11.056 "method": "bdev_nvme_attach_controller" 00:37:11.056 } 00:37:11.056 EOF 00:37:11.056 )") 00:37:11.056 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:11.057 { 00:37:11.057 "params": { 00:37:11.057 "name": "Nvme$subsystem", 00:37:11.057 "trtype": "$TEST_TRANSPORT", 00:37:11.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.057 "adrfam": "ipv4", 00:37:11.057 "trsvcid": "$NVMF_PORT", 00:37:11.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.057 "hdgst": ${hdgst:-false}, 00:37:11.057 "ddgst": ${ddgst:-false} 00:37:11.057 }, 00:37:11.057 "method": "bdev_nvme_attach_controller" 00:37:11.057 } 00:37:11.057 EOF 00:37:11.057 )") 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:11.057 "params": { 00:37:11.057 "name": "Nvme0", 00:37:11.057 "trtype": "tcp", 00:37:11.057 "traddr": "10.0.0.2", 00:37:11.057 "adrfam": "ipv4", 00:37:11.057 "trsvcid": "4420", 00:37:11.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.057 "hdgst": false, 00:37:11.057 "ddgst": false 00:37:11.057 }, 00:37:11.057 "method": "bdev_nvme_attach_controller" 00:37:11.057 },{ 00:37:11.057 "params": { 00:37:11.057 "name": "Nvme1", 00:37:11.057 "trtype": "tcp", 00:37:11.057 "traddr": "10.0.0.2", 00:37:11.057 "adrfam": "ipv4", 00:37:11.057 "trsvcid": "4420", 00:37:11.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:11.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:11.057 "hdgst": false, 00:37:11.057 "ddgst": false 00:37:11.057 }, 00:37:11.057 "method": "bdev_nvme_attach_controller" 00:37:11.057 }' 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:11.057 12:51:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:11.057 12:51:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.057 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:11.057 ... 00:37:11.057 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:11.057 ... 00:37:11.057 fio-3.35 00:37:11.057 Starting 4 threads 00:37:16.325 00:37:16.325 filename0: (groupid=0, jobs=1): err= 0: pid=1216927: Wed Nov 20 12:51:21 2024 00:37:16.325 read: IOPS=3078, BW=24.0MiB/s (25.2MB/s)(120MiB/5001msec) 00:37:16.325 slat (usec): min=5, max=363, avg= 9.43, stdev= 4.89 00:37:16.325 clat (usec): min=617, max=5127, avg=2568.88, stdev=423.51 00:37:16.325 lat (usec): min=637, max=5138, avg=2578.31, stdev=423.57 00:37:16.325 clat percentiles (usec): 00:37:16.325 | 1.00th=[ 1483], 5.00th=[ 1975], 10.00th=[ 2114], 20.00th=[ 2245], 00:37:16.325 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2540], 60.00th=[ 2671], 00:37:16.325 | 70.00th=[ 2769], 80.00th=[ 2868], 90.00th=[ 2999], 95.00th=[ 3228], 00:37:16.325 | 99.00th=[ 3818], 99.50th=[ 4178], 99.90th=[ 4752], 99.95th=[ 4817], 00:37:16.325 | 99.99th=[ 5080] 00:37:16.325 bw ( KiB/s): min=22496, max=26448, per=27.40%, avg=24764.44, stdev=1286.85, samples=9 00:37:16.325 iops : min= 2812, max= 3306, avg=3095.56, stdev=160.86, samples=9 00:37:16.325 lat (usec) : 750=0.07%, 1000=0.34% 00:37:16.325 lat (msec) : 2=5.24%, 4=93.60%, 10=0.75% 00:37:16.325 cpu : usr=95.92%, sys=3.70%, ctx=46, majf=0, minf=100 00:37:16.325 IO depths : 1=0.4%, 2=10.8%, 4=60.0%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.325 complete : 
0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.325 issued rwts: total=15395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.325 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.325 filename0: (groupid=0, jobs=1): err= 0: pid=1216928: Wed Nov 20 12:51:21 2024 00:37:16.325 read: IOPS=2742, BW=21.4MiB/s (22.5MB/s)(107MiB/5002msec) 00:37:16.325 slat (nsec): min=5479, max=49881, avg=9645.15, stdev=4165.64 00:37:16.326 clat (usec): min=679, max=5377, avg=2887.86, stdev=506.32 00:37:16.326 lat (usec): min=689, max=5390, avg=2897.50, stdev=505.89 00:37:16.326 clat percentiles (usec): 00:37:16.326 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540], 00:37:16.326 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2900], 00:37:16.326 | 70.00th=[ 2966], 80.00th=[ 3163], 90.00th=[ 3458], 95.00th=[ 3916], 00:37:16.326 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5014], 99.95th=[ 5080], 00:37:16.326 | 99.99th=[ 5342] 00:37:16.326 bw ( KiB/s): min=21008, max=22768, per=24.33%, avg=21987.56, stdev=587.87, samples=9 00:37:16.326 iops : min= 2626, max= 2846, avg=2748.44, stdev=73.48, samples=9 00:37:16.326 lat (usec) : 750=0.01%, 1000=0.04% 00:37:16.326 lat (msec) : 2=1.73%, 4=94.15%, 10=4.07% 00:37:16.326 cpu : usr=97.36%, sys=2.32%, ctx=9, majf=0, minf=62 00:37:16.326 IO depths : 1=0.2%, 2=5.9%, 4=65.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.326 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.326 issued rwts: total=13717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.326 filename1: (groupid=0, jobs=1): err= 0: pid=1216930: Wed Nov 20 12:51:21 2024 00:37:16.326 read: IOPS=2627, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:37:16.326 slat (nsec): min=5393, max=64901, avg=9204.86, stdev=4093.06 00:37:16.326 clat (usec): min=514, 
max=5504, avg=3017.16, stdev=511.84 00:37:16.326 lat (usec): min=525, max=5510, avg=3026.36, stdev=511.43 00:37:16.326 clat percentiles (usec): 00:37:16.326 | 1.00th=[ 1975], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2704], 00:37:16.326 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:37:16.326 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3687], 95.00th=[ 4047], 00:37:16.326 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5145], 99.95th=[ 5211], 00:37:16.326 | 99.99th=[ 5473] 00:37:16.326 bw ( KiB/s): min=20368, max=21947, per=23.30%, avg=21057.22, stdev=543.82, samples=9 00:37:16.326 iops : min= 2546, max= 2743, avg=2632.11, stdev=67.90, samples=9 00:37:16.326 lat (usec) : 750=0.02%, 1000=0.05% 00:37:16.326 lat (msec) : 2=1.00%, 4=93.48%, 10=5.45% 00:37:16.326 cpu : usr=96.70%, sys=3.02%, ctx=7, majf=0, minf=88 00:37:16.326 IO depths : 1=0.5%, 2=3.6%, 4=69.1%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.326 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.326 issued rwts: total=13139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.326 filename1: (groupid=0, jobs=1): err= 0: pid=1216931: Wed Nov 20 12:51:21 2024 00:37:16.326 read: IOPS=2849, BW=22.3MiB/s (23.3MB/s)(111MiB/5002msec) 00:37:16.326 slat (nsec): min=5464, max=64582, avg=9409.70, stdev=4119.86 00:37:16.326 clat (usec): min=570, max=5179, avg=2778.31, stdev=517.22 00:37:16.326 lat (usec): min=579, max=5191, avg=2787.72, stdev=517.15 00:37:16.326 clat percentiles (usec): 00:37:16.326 | 1.00th=[ 1663], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2409], 00:37:16.326 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2835], 00:37:16.326 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3392], 95.00th=[ 3818], 00:37:16.326 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[ 5145], 
00:37:16.326 | 99.99th=[ 5145] 00:37:16.326 bw ( KiB/s): min=20688, max=24896, per=25.12%, avg=22702.22, stdev=1364.00, samples=9 00:37:16.326 iops : min= 2586, max= 3112, avg=2837.78, stdev=170.50, samples=9 00:37:16.326 lat (usec) : 750=0.01%, 1000=0.01% 00:37:16.326 lat (msec) : 2=3.24%, 4=93.29%, 10=3.44% 00:37:16.326 cpu : usr=96.76%, sys=2.96%, ctx=7, majf=0, minf=73 00:37:16.326 IO depths : 1=0.4%, 2=7.7%, 4=63.5%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.326 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.326 issued rwts: total=14255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.326 00:37:16.326 Run status group 0 (all jobs): 00:37:16.326 READ: bw=88.3MiB/s (92.5MB/s), 20.5MiB/s-24.0MiB/s (21.5MB/s-25.2MB/s), io=441MiB (463MB), run=5001-5002msec 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:16.585 12:51:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.585 00:37:16.585 real 0m24.670s 00:37:16.585 user 4m59.826s 00:37:16.585 sys 0m4.695s 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.585 12:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.585 ************************************ 00:37:16.585 END TEST fio_dif_rand_params 00:37:16.585 ************************************ 00:37:16.585 12:51:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:16.585 12:51:22 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:16.585 12:51:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.585 12:51:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:16.585 ************************************ 00:37:16.585 START TEST fio_dif_digest 00:37:16.585 ************************************ 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:16.585 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.586 bdev_null0 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.586 [2024-11-20 12:51:22.288175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:16.586 { 00:37:16.586 "params": { 00:37:16.586 "name": "Nvme$subsystem", 00:37:16.586 "trtype": "$TEST_TRANSPORT", 00:37:16.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.586 "adrfam": "ipv4", 00:37:16.586 "trsvcid": "$NVMF_PORT", 00:37:16.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.586 "hdgst": ${hdgst:-false}, 00:37:16.586 "ddgst": ${ddgst:-false} 00:37:16.586 }, 00:37:16.586 "method": "bdev_nvme_attach_controller" 00:37:16.586 } 00:37:16.586 EOF 00:37:16.586 )") 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:16.586 "params": { 00:37:16.586 "name": "Nvme0", 00:37:16.586 "trtype": "tcp", 00:37:16.586 "traddr": "10.0.0.2", 00:37:16.586 "adrfam": "ipv4", 00:37:16.586 "trsvcid": "4420", 00:37:16.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.586 "hdgst": true, 00:37:16.586 "ddgst": true 00:37:16.586 }, 00:37:16.586 "method": "bdev_nvme_attach_controller" 00:37:16.586 }' 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:16.586 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:16.867 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:16.867 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:16.867 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:16.867 12:51:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.130 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:17.130 ... 
00:37:17.130 fio-3.35 00:37:17.130 Starting 3 threads 00:37:29.321 00:37:29.321 filename0: (groupid=0, jobs=1): err= 0: pid=1218139: Wed Nov 20 12:51:33 2024 00:37:29.321 read: IOPS=312, BW=39.0MiB/s (40.9MB/s)(392MiB/10045msec) 00:37:29.321 slat (nsec): min=5710, max=43927, avg=13486.24, stdev=6221.04 00:37:29.321 clat (usec): min=5862, max=51430, avg=9583.38, stdev=1251.46 00:37:29.321 lat (usec): min=5869, max=51441, avg=9596.87, stdev=1251.45 00:37:29.321 clat percentiles (usec): 00:37:29.321 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8586], 20.00th=[ 8979], 00:37:29.321 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:37:29.321 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:37:29.321 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12387], 99.95th=[46400], 00:37:29.321 | 99.99th=[51643] 00:37:29.321 bw ( KiB/s): min=36608, max=41984, per=34.71%, avg=40102.40, stdev=1422.38, samples=20 00:37:29.321 iops : min= 286, max= 328, avg=313.30, stdev=11.11, samples=20 00:37:29.321 lat (msec) : 10=74.74%, 20=25.20%, 50=0.03%, 100=0.03% 00:37:29.321 cpu : usr=95.63%, sys=4.05%, ctx=30, majf=0, minf=181 00:37:29.321 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.321 issued rwts: total=3135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:29.321 filename0: (groupid=0, jobs=1): err= 0: pid=1218140: Wed Nov 20 12:51:33 2024 00:37:29.321 read: IOPS=291, BW=36.5MiB/s (38.3MB/s)(367MiB/10043msec) 00:37:29.321 slat (nsec): min=5688, max=57153, avg=15447.17, stdev=6444.33 00:37:29.321 clat (usec): min=5605, max=49170, avg=10244.75, stdev=1286.56 00:37:29.321 lat (usec): min=5612, max=49178, avg=10260.20, stdev=1286.65 00:37:29.321 clat percentiles (usec): 
00:37:29.321 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:37:29.321 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:37:29.321 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:37:29.321 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13566], 99.95th=[43254], 00:37:29.321 | 99.99th=[49021] 00:37:29.321 bw ( KiB/s): min=33792, max=39424, per=32.46%, avg=37504.00, stdev=1420.50, samples=20 00:37:29.321 iops : min= 264, max= 308, avg=293.00, stdev=11.10, samples=20 00:37:29.321 lat (msec) : 10=42.53%, 20=57.40%, 50=0.07% 00:37:29.321 cpu : usr=95.90%, sys=3.78%, ctx=35, majf=0, minf=129 00:37:29.321 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.321 issued rwts: total=2932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:29.321 filename0: (groupid=0, jobs=1): err= 0: pid=1218141: Wed Nov 20 12:51:33 2024 00:37:29.321 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(375MiB/10044msec) 00:37:29.321 slat (nsec): min=5604, max=44444, avg=13460.16, stdev=6479.89 00:37:29.321 clat (usec): min=6241, max=52016, avg=10015.16, stdev=1800.43 00:37:29.321 lat (usec): min=6253, max=52041, avg=10028.62, stdev=1800.57 00:37:29.321 clat percentiles (usec): 00:37:29.321 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9241], 00:37:29.321 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:37:29.321 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[11469], 00:37:29.321 | 99.00th=[12125], 99.50th=[12518], 99.90th=[52167], 99.95th=[52167], 00:37:29.321 | 99.99th=[52167] 00:37:29.321 bw ( KiB/s): min=34816, max=40192, per=33.21%, avg=38374.40, stdev=1504.00, samples=20 00:37:29.321 iops : min= 272, max= 314, avg=299.80, 
stdev=11.75, samples=20 00:37:29.321 lat (msec) : 10=57.70%, 20=42.13%, 50=0.07%, 100=0.10% 00:37:29.321 cpu : usr=95.62%, sys=4.07%, ctx=17, majf=0, minf=136 00:37:29.321 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.321 issued rwts: total=3000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:29.321 00:37:29.321 Run status group 0 (all jobs): 00:37:29.321 READ: bw=113MiB/s (118MB/s), 36.5MiB/s-39.0MiB/s (38.3MB/s-40.9MB/s), io=1133MiB (1188MB), run=10043-10045msec 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.321 
00:37:29.321 real 0m11.230s 00:37:29.321 user 0m38.042s 00:37:29.321 sys 0m1.635s 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.321 12:51:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:29.321 ************************************ 00:37:29.321 END TEST fio_dif_digest 00:37:29.321 ************************************ 00:37:29.321 12:51:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:29.321 12:51:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:29.321 rmmod nvme_tcp 00:37:29.321 rmmod nvme_fabrics 00:37:29.321 rmmod nvme_keyring 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1208333 ']' 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1208333 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1208333 ']' 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1208333 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1208333 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1208333' 00:37:29.321 killing process with pid 1208333 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1208333 00:37:29.321 12:51:33 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1208333 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:29.321 12:51:33 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:31.227 Waiting for block devices as requested 00:37:31.227 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:37:31.486 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:31.486 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:31.745 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:31.745 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:31.745 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:32.003 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:32.003 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:32.003 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:32.003 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:32.261 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:37:32.261 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:32.519 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:32.519 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:32.519 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:32.519 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:32.778 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:32.778 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:32.778 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:33.037 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:33.037 12:51:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:33.037 12:51:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:33.037 12:51:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.573 12:51:40 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:35.573 00:37:35.573 real 1m16.097s 00:37:35.573 user 7m27.013s 00:37:35.573 sys 0m20.799s 00:37:35.573 12:51:40 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.573 12:51:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:35.573 ************************************ 00:37:35.573 END TEST nvmf_dif 00:37:35.573 ************************************ 00:37:35.573 12:51:40 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:35.573 12:51:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:35.573 12:51:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.573 12:51:40 -- common/autotest_common.sh@10 -- # set +x 00:37:35.573 ************************************ 00:37:35.573 START TEST nvmf_abort_qd_sizes 00:37:35.573 ************************************ 00:37:35.573 12:51:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:35.573 * Looking for test storage... 
00:37:35.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:35.573 12:51:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:35.573 12:51:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:35.573 12:51:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.573 --rc genhtml_branch_coverage=1 00:37:35.573 --rc genhtml_function_coverage=1 00:37:35.573 --rc genhtml_legend=1 00:37:35.573 --rc geninfo_all_blocks=1 00:37:35.573 --rc geninfo_unexecuted_blocks=1 00:37:35.573 00:37:35.573 ' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.573 --rc genhtml_branch_coverage=1 00:37:35.573 --rc genhtml_function_coverage=1 00:37:35.573 --rc genhtml_legend=1 00:37:35.573 --rc 
geninfo_all_blocks=1 00:37:35.573 --rc geninfo_unexecuted_blocks=1 00:37:35.573 00:37:35.573 ' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.573 --rc genhtml_branch_coverage=1 00:37:35.573 --rc genhtml_function_coverage=1 00:37:35.573 --rc genhtml_legend=1 00:37:35.573 --rc geninfo_all_blocks=1 00:37:35.573 --rc geninfo_unexecuted_blocks=1 00:37:35.573 00:37:35.573 ' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.573 --rc genhtml_branch_coverage=1 00:37:35.573 --rc genhtml_function_coverage=1 00:37:35.573 --rc genhtml_legend=1 00:37:35.573 --rc geninfo_all_blocks=1 00:37:35.573 --rc geninfo_unexecuted_blocks=1 00:37:35.573 00:37:35.573 ' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:35.573 12:51:41 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.573 12:51:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:35.574 12:51:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:35.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:35.574 12:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:42.143 12:51:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.0 (0x8086 - 0x159b)' 00:37:42.143 Found 0000:1a:00.0 (0x8086 - 0x159b) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:1a:00.1 (0x8086 - 0x159b)' 00:37:42.143 Found 0000:1a:00.1 (0x8086 - 0x159b) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.0: cvl_0_0' 00:37:42.143 Found net devices under 0000:1a:00.0: cvl_0_0 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.143 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:1a:00.1: cvl_0_1' 00:37:42.143 Found net devices under 0000:1a:00.1: cvl_0_1 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:42.144 12:51:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:42.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:42.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:37:42.144 00:37:42.144 --- 10.0.0.2 ping statistics --- 00:37:42.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.144 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:42.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:42.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:37:42.144 00:37:42.144 --- 10.0.0.1 ping statistics --- 00:37:42.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.144 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:42.144 12:51:47 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:45.436 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:45.436 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:46.004 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:37:46.942 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:37:46.942 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:46.942 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:47.201 12:51:52 nvmf_abort_qd_sizes 
-- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1226894 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1226894 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1226894 ']' 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:47.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:47.201 12:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.201 [2024-11-20 12:51:52.825504] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:37:47.201 [2024-11-20 12:51:52.825544] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:47.201 [2024-11-20 12:51:52.902623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:47.201 [2024-11-20 12:51:52.943247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:47.201 [2024-11-20 12:51:52.943284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:47.201 [2024-11-20 12:51:52.943291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:47.201 [2024-11-20 12:51:52.943296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:47.201 [2024-11-20 12:51:52.943301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:47.201 [2024-11-20 12:51:52.944924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.201 [2024-11-20 12:51:52.945060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:47.201 [2024-11-20 12:51:52.945175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.201 [2024-11-20 12:51:52.945176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 0000:d9:00.0 ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d9:00.0 ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 4 )) 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 0000:d9:00.0 00:37:48.130 12:51:53 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 4 > 0 )) 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.130 12:51:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.130 ************************************ 00:37:48.130 START TEST spdk_target_abort 00:37:48.130 ************************************ 00:37:48.130 12:51:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:48.130 12:51:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:48.130 12:51:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:37:48.130 12:51:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.130 12:51:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.401 spdk_targetn1 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.401 [2024-11-20 12:51:56.575763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:51.401 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.402 [2024-11-20 12:51:56.609191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:51.402 12:51:56 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:51.402 12:51:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:54.671 Initializing NVMe Controllers 00:37:54.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:54.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:54.671 Initialization complete. Launching workers. 00:37:54.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10809, failed: 0 00:37:54.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1159, failed to submit 9650 00:37:54.671 success 518, unsuccessful 641, failed 0 00:37:54.671 12:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:54.671 12:51:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:58.055 Initializing NVMe Controllers 00:37:58.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:58.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:58.055 Initialization complete. Launching workers. 
00:37:58.055 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8894, failed: 0 00:37:58.055 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1224, failed to submit 7670 00:37:58.055 success 339, unsuccessful 885, failed 0 00:37:58.056 12:52:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:58.056 12:52:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:00.577 Initializing NVMe Controllers 00:38:00.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:00.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:00.577 Initialization complete. Launching workers. 00:38:00.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40573, failed: 0 00:38:00.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2817, failed to submit 37756 00:38:00.577 success 600, unsuccessful 2217, failed 0 00:38:00.577 12:52:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:00.577 12:52:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.577 12:52:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.577 12:52:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.577 12:52:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:00.577 12:52:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:00.577 12:52:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1226894 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1226894 ']' 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1226894 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1226894 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:03.120 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1226894' 00:38:03.121 killing process with pid 1226894 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1226894 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1226894 00:38:03.121 00:38:03.121 real 0m14.868s 00:38:03.121 user 0m59.415s 00:38:03.121 sys 0m2.422s 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:03.121 ************************************ 00:38:03.121 END TEST spdk_target_abort 00:38:03.121 
************************************ 00:38:03.121 12:52:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:03.121 12:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:03.121 12:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.121 12:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:03.121 ************************************ 00:38:03.121 START TEST kernel_target_abort 00:38:03.121 ************************************ 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:03.121 12:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:06.409 Waiting for block devices as requested 00:38:06.409 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:38:06.409 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:06.409 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:06.668 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:06.668 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:06.668 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:06.668 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:06.927 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:06.927 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:06.927 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:07.187 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:38:07.187 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:07.187 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:07.446 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:07.446 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:07.446 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:07.705 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:07.705 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:07.705 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:07.705 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:07.965 
12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:07.965 No valid GPT data, bailing 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:07.965 12:52:13 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:38:07.965 No valid GPT data, bailing 00:38:07.965 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme2n1 ]] 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme2n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme2n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme2n1 pt 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:38:08.224 No valid GPT 
data, bailing 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme2n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme3n1 ]] 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme3n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme3n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme3n1 pt 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme3n1 00:38:08.224 No valid GPT data, bailing 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:08.224 12:52:13 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme3n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme3n1 ]] 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme3n1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 --hostid=005363bc-ad7e-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:08.224 00:38:08.224 Discovery Log Number of Records 2, 
Generation counter 2 00:38:08.224 =====Discovery Log Entry 0====== 00:38:08.224 trtype: tcp 00:38:08.224 adrfam: ipv4 00:38:08.224 subtype: current discovery subsystem 00:38:08.224 treq: not specified, sq flow control disable supported 00:38:08.224 portid: 1 00:38:08.224 trsvcid: 4420 00:38:08.224 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:08.224 traddr: 10.0.0.1 00:38:08.224 eflags: none 00:38:08.224 sectype: none 00:38:08.224 =====Discovery Log Entry 1====== 00:38:08.224 trtype: tcp 00:38:08.224 adrfam: ipv4 00:38:08.224 subtype: nvme subsystem 00:38:08.224 treq: not specified, sq flow control disable supported 00:38:08.224 portid: 1 00:38:08.224 trsvcid: 4420 00:38:08.224 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:08.224 traddr: 10.0.0.1 00:38:08.224 eflags: none 00:38:08.224 sectype: none 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:08.224 12:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:11.512 Initializing NVMe Controllers 00:38:11.512 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:11.512 Associating TCP (addr:10.0.0.1 
subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:11.512 Initialization complete. Launching workers. 00:38:11.512 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85453, failed: 0 00:38:11.512 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 85453, failed to submit 0 00:38:11.512 success 0, unsuccessful 85453, failed 0 00:38:11.512 12:52:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:11.512 12:52:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:14.799 Initializing NVMe Controllers 00:38:14.799 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:14.799 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:14.799 Initialization complete. Launching workers. 
00:38:14.799 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 156393, failed: 0 00:38:14.799 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30630, failed to submit 125763 00:38:14.799 success 0, unsuccessful 30630, failed 0 00:38:14.799 12:52:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:14.799 12:52:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:18.086 Initializing NVMe Controllers 00:38:18.086 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:18.086 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:18.086 Initialization complete. Launching workers. 00:38:18.086 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 140531, failed: 0 00:38:18.086 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35178, failed to submit 105353 00:38:18.086 success 0, unsuccessful 35178, failed 0 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 
00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:18.086 12:52:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:21.376 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:21.376 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:21.376 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:21.376 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:21.376 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:21.376 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:21.376 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:21.377 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:22.314 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:38:22.882 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:38:22.882 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:23.142 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:38:23.142 00:38:23.142 real 0m20.156s 00:38:23.142 user 0m9.020s 00:38:23.142 sys 0m6.245s 00:38:23.142 12:52:28 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.142 12:52:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.142 ************************************ 00:38:23.142 END TEST kernel_target_abort 00:38:23.142 ************************************ 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:23.142 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:23.142 rmmod nvme_tcp 00:38:23.401 rmmod nvme_fabrics 00:38:23.401 rmmod nvme_keyring 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1226894 ']' 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1226894 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1226894 ']' 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1226894 00:38:23.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1226894) - No such process 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 
1226894 is not found' 00:38:23.401 Process with pid 1226894 is not found 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:23.401 12:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:26.692 Waiting for block devices as requested 00:38:26.692 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:38:26.692 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:26.692 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:26.692 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:26.952 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:26.952 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:26.952 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:27.211 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:27.211 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:27.211 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:27.470 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:38:27.470 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:27.470 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:27.728 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:27.728 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:27.729 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:27.729 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:27.988 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:27.988 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:27.988 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@791 -- # iptables-restore 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:28.247 12:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.785 12:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:30.785 00:38:30.785 real 0m55.079s 00:38:30.785 user 1m13.391s 00:38:30.785 sys 0m18.518s 00:38:30.785 12:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.785 12:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:30.785 ************************************ 00:38:30.785 END TEST nvmf_abort_qd_sizes 00:38:30.785 ************************************ 00:38:30.785 12:52:35 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:30.785 12:52:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:30.785 12:52:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.785 12:52:35 -- common/autotest_common.sh@10 -- # set +x 00:38:30.785 ************************************ 00:38:30.785 START TEST keyring_file 00:38:30.785 ************************************ 00:38:30.785 12:52:36 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:30.785 * Looking for test storage... 
00:38:30.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:30.785 12:52:36 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:30.785 12:52:36 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:30.785 12:52:36 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:30.785 12:52:36 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:30.785 12:52:36 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.785 12:52:36 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.785 12:52:36 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.785 12:52:36 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:30.786 12:52:36 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.786 12:52:36 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:30.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.786 --rc genhtml_branch_coverage=1 00:38:30.786 --rc genhtml_function_coverage=1 00:38:30.786 --rc genhtml_legend=1 00:38:30.786 --rc geninfo_all_blocks=1 00:38:30.786 --rc geninfo_unexecuted_blocks=1 00:38:30.786 00:38:30.786 ' 00:38:30.786 12:52:36 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:30.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.786 --rc genhtml_branch_coverage=1 00:38:30.786 --rc genhtml_function_coverage=1 00:38:30.786 --rc genhtml_legend=1 00:38:30.786 --rc geninfo_all_blocks=1 00:38:30.786 --rc geninfo_unexecuted_blocks=1 00:38:30.786 00:38:30.786 ' 00:38:30.786 
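The `cmp_versions`/`lt 1.15 2` trace above splits each version string on `.`, `-`, and `:` into arrays and compares them component by component. A minimal, self-contained sketch of that idiom (simplified from `scripts/common.sh`; the function name and structure here are illustrative, not the exact SPDK helper):

```shell
# Simplified version-compare: returns 0 (true) when $1 < $2.
# Mirrors the IFS=.-: / read -ra splitting seen in the trace above.
lt() {
    local IFS=.-:
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad missing components with 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # versions are equal, so "less than" is false
}
lt 1.15 2 && echo "1.15 < 2"
```

This is why the lcov check above takes the "old lcov" branch: `1.15 < 2` holds after the first component comparison.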
12:52:36 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:30.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.786 --rc genhtml_branch_coverage=1 00:38:30.786 --rc genhtml_function_coverage=1 00:38:30.786 --rc genhtml_legend=1 00:38:30.786 --rc geninfo_all_blocks=1 00:38:30.786 --rc geninfo_unexecuted_blocks=1 00:38:30.786 00:38:30.786 ' 00:38:30.786 12:52:36 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:30.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.786 --rc genhtml_branch_coverage=1 00:38:30.786 --rc genhtml_function_coverage=1 00:38:30.786 --rc genhtml_legend=1 00:38:30.786 --rc geninfo_all_blocks=1 00:38:30.786 --rc geninfo_unexecuted_blocks=1 00:38:30.786 00:38:30.786 ' 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.786 12:52:36 
keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.786 12:52:36 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.786 12:52:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.786 12:52:36 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.786 12:52:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.786 12:52:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:30.786 12:52:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:30.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fAy2Z46M3R 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:30.786 12:52:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fAy2Z46M3R 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fAy2Z46M3R 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fAy2Z46M3R 00:38:30.786 12:52:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:30.786 12:52:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:30.787 12:52:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:30.787 12:52:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:30.787 12:52:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:30.787 12:52:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CwTJ7ZzwOZ 00:38:30.787 12:52:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:30.787 12:52:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:30.787 12:52:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:30.787 12:52:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:30.787 12:52:36 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:30.787 12:52:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:30.787 12:52:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:30.787 12:52:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CwTJ7ZzwOZ 00:38:30.787 12:52:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CwTJ7ZzwOZ 00:38:30.787 12:52:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.CwTJ7ZzwOZ 
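The `prep_key` sequence traced above boils down to: make a temp file, write the formatted PSK into it, and tighten permissions to 0600 so the keyring will accept it. A self-contained sketch (the PSK contents below are a placeholder; the real test derives the interchange string via SPDK's `format_interchange_psk`/`format_key` helpers, which the python one-liner in the trace implements):

```shell
# Sketch of prep_key: temp file -> PSK string -> 0600 permissions.
path=$(mktemp /tmp/tmp.XXXXXXXXXX)
# Placeholder PSK; a real run derives this from the hex key and digest.
printf 'NVMeTLSkey-1:00:placeholder-not-a-real-key:\n' > "$path"
chmod 0600 "$path"   # keyring_file rejects keys readable by group/others
stat -c '%a' "$path"
```

The 0600 requirement is exactly what the later `chmod 0660` negative test in this log exercises: the same file with group-read permission is rejected with "Invalid permissions for key file".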
00:38:30.787 12:52:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=1237120 00:38:30.787 12:52:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1237120 00:38:30.787 12:52:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:30.787 12:52:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1237120 ']' 00:38:30.787 12:52:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.787 12:52:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:30.787 12:52:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.787 12:52:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:30.787 12:52:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:30.787 [2024-11-20 12:52:36.371616] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
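The `tgtpid=.../waitforlisten` pattern above starts `spdk_tgt` in the background, records its pid, then polls (up to `max_retries=100`) until the RPC socket is listening. A stand-in sketch with the daemon and socket path replaced by placeholders so it runs anywhere:

```shell
# Sketch of the tgtpid/waitforlisten idiom. The background subshell stands
# in for spdk_tgt, and the file stands in for /var/tmp/spdk.sock.
ready_file=$(mktemp -u)
( sleep 0.2; touch "$ready_file" ) &   # "daemon" creates its socket when up
tgtpid=$!
for _ in $(seq 1 100); do              # max_retries=100, as in the log
    [ -e "$ready_file" ] && break
    sleep 0.1
done
wait "$tgtpid"
[ -e "$ready_file" ] && echo "listener up (pid $tgtpid)"
```

The real helper additionally verifies the pid is still alive on each retry, so a crashed target fails fast instead of burning the full retry budget.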
00:38:30.787 [2024-11-20 12:52:36.371661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237120 ] 00:38:30.787 [2024-11-20 12:52:36.443345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.787 [2024-11-20 12:52:36.482397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:31.046 12:52:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:31.046 [2024-11-20 12:52:36.699400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:31.046 null0 00:38:31.046 [2024-11-20 12:52:36.731462] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:31.046 [2024-11-20 12:52:36.731913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.046 12:52:36 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:31.046 [2024-11-20 12:52:36.763535] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:31.046 request: 00:38:31.046 { 00:38:31.046 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:31.046 "secure_channel": false, 00:38:31.046 "listen_address": { 00:38:31.046 "trtype": "tcp", 00:38:31.046 "traddr": "127.0.0.1", 00:38:31.046 "trsvcid": "4420" 00:38:31.046 }, 00:38:31.046 "method": "nvmf_subsystem_add_listener", 00:38:31.046 "req_id": 1 00:38:31.046 } 00:38:31.046 Got JSON-RPC error response 00:38:31.046 response: 00:38:31.046 { 00:38:31.046 "code": -32602, 00:38:31.046 "message": "Invalid parameters" 00:38:31.046 } 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:31.046 12:52:36 keyring_file -- keyring/file.sh@47 -- # bperfpid=1237177 00:38:31.046 12:52:36 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1237177 /var/tmp/bperf.sock 00:38:31.046 12:52:36 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:31.046 12:52:36 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1237177 ']' 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:31.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:31.046 12:52:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:31.305 [2024-11-20 12:52:36.818301] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:38:31.305 [2024-11-20 12:52:36.818342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237177 ] 00:38:31.305 [2024-11-20 12:52:36.891559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.305 [2024-11-20 12:52:36.931283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:31.305 12:52:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:31.305 12:52:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:31.305 12:52:37 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:31.305 12:52:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:31.564 12:52:37 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CwTJ7ZzwOZ 00:38:31.565 12:52:37 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CwTJ7ZzwOZ 00:38:31.823 12:52:37 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:31.823 12:52:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:31.823 12:52:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.823 12:52:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:31.823 12:52:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.823 12:52:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.fAy2Z46M3R == \/\t\m\p\/\t\m\p\.\f\A\y\2\Z\4\6\M\3\R ]] 00:38:31.823 12:52:37 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:31.823 12:52:37 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:31.823 12:52:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.823 12:52:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.823 12:52:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:32.083 12:52:37 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.CwTJ7ZzwOZ == \/\t\m\p\/\t\m\p\.\C\w\T\J\7\Z\z\w\O\Z ]] 00:38:32.083 12:52:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:32.083 12:52:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:32.083 12:52:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.083 12:52:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.083 12:52:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:32.083 12:52:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
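The `get_key`/`get_refcnt` checks traced here pipe `keyring_get_keys` output through `jq`, selecting one key by name and extracting a field. A self-contained sketch using canned JSON in place of the live RPC (in the real test the JSON comes from `rpc.py -s /var/tmp/bperf.sock keyring_get_keys`; requires `jq`):

```shell
# Sketch of the get_refcnt pattern with canned keyring_get_keys output.
keys='[{"name":"key0","path":"/tmp/key0","refcnt":1},
       {"name":"key1","path":"/tmp/key1","refcnt":1}]'
refcnt=$(echo "$keys" | jq -r '.[] | select(.name == "key0") | .refcnt')
echo "$refcnt"
```

The `(( 1 == 1 ))` assertions in the log are this value compared against the expected refcount; after `bdev_nvme_attach_controller` takes a reference on key0, the expectation rises to `(( 2 == 2 ))`, as seen a few records later.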
00:38:32.342 12:52:37 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:32.342 12:52:37 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:32.342 12:52:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:32.342 12:52:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.342 12:52:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.342 12:52:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:32.342 12:52:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.600 12:52:38 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:32.600 12:52:38 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:32.600 12:52:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:32.600 [2024-11-20 12:52:38.303565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:32.859 nvme0n1 00:38:32.859 12:52:38 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:32.859 12:52:38 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:32.859 12:52:38 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.859 12:52:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:33.118 12:52:38 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:33.118 12:52:38 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:33.118 Running I/O for 1 seconds... 00:38:34.492 21041.00 IOPS, 82.19 MiB/s 00:38:34.492 Latency(us) 00:38:34.492 [2024-11-20T11:52:40.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.492 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:34.492 nvme0n1 : 1.00 21086.57 82.37 0.00 0.00 6060.27 2934.23 9651.67 00:38:34.492 [2024-11-20T11:52:40.256Z] =================================================================================================================== 00:38:34.492 [2024-11-20T11:52:40.256Z] Total : 21086.57 82.37 0.00 0.00 6060.27 2934.23 9651.67 00:38:34.492 { 00:38:34.492 "results": [ 00:38:34.492 { 00:38:34.492 "job": "nvme0n1", 00:38:34.492 "core_mask": "0x2", 00:38:34.492 "workload": "randrw", 00:38:34.492 "percentage": 50, 00:38:34.492 "status": "finished", 00:38:34.492 "queue_depth": 128, 00:38:34.492 "io_size": 4096, 00:38:34.492 "runtime": 1.003909, 00:38:34.492 "iops": 21086.57258775447, 00:38:34.492 "mibps": 82.36942417091589, 
00:38:34.492 "io_failed": 0, 00:38:34.492 "io_timeout": 0, 00:38:34.492 "avg_latency_us": 6060.266818289179, 00:38:34.492 "min_latency_us": 2934.2254545454543, 00:38:34.492 "max_latency_us": 9651.665454545455 00:38:34.492 } 00:38:34.492 ], 00:38:34.492 "core_count": 1 00:38:34.492 } 00:38:34.492 12:52:39 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:34.492 12:52:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:34.492 12:52:40 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:34.492 12:52:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:34.492 12:52:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:34.492 12:52:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:34.492 12:52:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:34.492 12:52:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.751 12:52:40 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:34.751 12:52:40 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:34.751 12:52:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:34.751 12:52:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:34.751 12:52:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:34.751 12:52:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:34.751 12:52:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:34.751 12:52:40 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:34.751 12:52:40 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:34.751 12:52:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:34.751 12:52:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:34.751 12:52:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:34.751 12:52:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:34.751 12:52:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:34.751 12:52:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:34.751 12:52:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:34.751 12:52:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:35.010 [2024-11-20 12:52:40.622939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:35.010 [2024-11-20 12:52:40.623141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc4140 (107): Transport endpoint is not connected 00:38:35.010 [2024-11-20 12:52:40.624136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc4140 (9): Bad file descriptor 00:38:35.010 [2024-11-20 12:52:40.625138] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:35.010 [2024-11-20 12:52:40.625149] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:35.010 [2024-11-20 12:52:40.625156] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:35.010 [2024-11-20 12:52:40.625165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:35.010 request: 00:38:35.010 { 00:38:35.010 "name": "nvme0", 00:38:35.010 "trtype": "tcp", 00:38:35.010 "traddr": "127.0.0.1", 00:38:35.010 "adrfam": "ipv4", 00:38:35.010 "trsvcid": "4420", 00:38:35.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.010 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.010 "prchk_reftag": false, 00:38:35.010 "prchk_guard": false, 00:38:35.010 "hdgst": false, 00:38:35.010 "ddgst": false, 00:38:35.010 "psk": "key1", 00:38:35.010 "allow_unrecognized_csi": false, 00:38:35.010 "method": "bdev_nvme_attach_controller", 00:38:35.010 "req_id": 1 00:38:35.010 } 00:38:35.010 Got JSON-RPC error response 00:38:35.010 response: 00:38:35.010 { 00:38:35.010 "code": -5, 00:38:35.010 "message": "Input/output error" 00:38:35.010 } 00:38:35.010 12:52:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:35.010 12:52:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:35.010 12:52:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:35.010 12:52:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:35.010 12:52:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:35.010 12:52:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:35.010 12:52:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.010 12:52:40 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:38:35.010 12:52:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:35.010 12:52:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.268 12:52:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:35.268 12:52:40 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:35.268 12:52:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:35.268 12:52:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.268 12:52:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.268 12:52:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:35.268 12:52:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.527 12:52:41 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:35.527 12:52:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:35.527 12:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:35.527 12:52:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:35.527 12:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:35.786 12:52:41 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:35.786 12:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.786 12:52:41 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:36.045 12:52:41 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:38:36.045 12:52:41 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.fAy2Z46M3R 00:38:36.045 12:52:41 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:36.045 12:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:36.045 [2024-11-20 12:52:41.733005] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fAy2Z46M3R': 0100660 00:38:36.045 [2024-11-20 12:52:41.733030] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:36.045 request: 00:38:36.045 { 00:38:36.045 "name": "key0", 00:38:36.045 "path": "/tmp/tmp.fAy2Z46M3R", 00:38:36.045 "method": "keyring_file_add_key", 00:38:36.045 "req_id": 1 00:38:36.045 } 00:38:36.045 Got JSON-RPC error response 00:38:36.045 response: 00:38:36.045 { 00:38:36.045 "code": -1, 00:38:36.045 "message": "Operation not permitted" 00:38:36.045 } 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:36.045 12:52:41 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:36.045 12:52:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:36.045 12:52:41 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.fAy2Z46M3R 00:38:36.045 12:52:41 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:36.045 12:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fAy2Z46M3R 00:38:36.303 12:52:41 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.fAy2Z46M3R 00:38:36.304 12:52:41 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:36.304 12:52:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:36.304 12:52:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:36.304 12:52:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:36.304 12:52:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:36.304 12:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:36.562 12:52:42 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:36.562 12:52:42 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:36.562 12:52:42 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:36.562 12:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:36.562 [2024-11-20 12:52:42.286471] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fAy2Z46M3R': No such file or directory 00:38:36.562 [2024-11-20 12:52:42.286492] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:36.562 [2024-11-20 12:52:42.286506] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:36.562 [2024-11-20 12:52:42.286512] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:36.562 [2024-11-20 12:52:42.286519] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:36.562 [2024-11-20 12:52:42.286524] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:36.562 request: 00:38:36.562 { 00:38:36.562 "name": "nvme0", 00:38:36.562 "trtype": "tcp", 00:38:36.562 "traddr": "127.0.0.1", 00:38:36.562 "adrfam": "ipv4", 00:38:36.562 "trsvcid": "4420", 00:38:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:36.562 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:36.562 "prchk_reftag": false, 00:38:36.562 "prchk_guard": false, 00:38:36.562 "hdgst": false, 00:38:36.562 "ddgst": false, 00:38:36.562 "psk": "key0", 00:38:36.562 "allow_unrecognized_csi": false, 00:38:36.562 "method": "bdev_nvme_attach_controller", 00:38:36.562 "req_id": 1 00:38:36.562 } 00:38:36.562 Got JSON-RPC error response 00:38:36.562 response: 00:38:36.562 { 00:38:36.562 "code": -19, 00:38:36.562 "message": "No such device" 00:38:36.562 } 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:36.562 12:52:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:36.563 12:52:42 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:36.563 12:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:36.821 12:52:42 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VIWVFXHN1e 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:36.821 12:52:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:36.821 12:52:42 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:38:36.821 12:52:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:36.821 12:52:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:36.821 12:52:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:36.821 12:52:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VIWVFXHN1e 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VIWVFXHN1e 00:38:36.821 12:52:42 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.VIWVFXHN1e 00:38:36.821 12:52:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VIWVFXHN1e 00:38:36.821 12:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VIWVFXHN1e 00:38:37.080 12:52:42 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:37.080 12:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:37.339 nvme0n1 00:38:37.339 12:52:42 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:37.339 12:52:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:37.339 12:52:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.339 12:52:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.339 12:52:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.339 12:52:42 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.598 12:52:43 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:37.598 12:52:43 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:37.598 12:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:37.857 12:52:43 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:37.857 12:52:43 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.857 12:52:43 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:37.857 12:52:43 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.857 12:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.115 12:52:43 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:38.115 12:52:43 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:38.115 12:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:38:38.374 12:52:43 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:38.374 12:52:43 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:38.374 12:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.374 12:52:44 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:38.374 12:52:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VIWVFXHN1e 00:38:38.374 12:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VIWVFXHN1e 00:38:38.633 12:52:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CwTJ7ZzwOZ 00:38:38.633 12:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CwTJ7ZzwOZ 00:38:38.892 12:52:44 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:38.892 12:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:39.151 nvme0n1 00:38:39.151 12:52:44 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:39.151 12:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:39.410 12:52:44 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:39.410 "subsystems": [ 00:38:39.410 { 00:38:39.410 "subsystem": 
"keyring", 00:38:39.410 "config": [ 00:38:39.410 { 00:38:39.410 "method": "keyring_file_add_key", 00:38:39.410 "params": { 00:38:39.410 "name": "key0", 00:38:39.410 "path": "/tmp/tmp.VIWVFXHN1e" 00:38:39.410 } 00:38:39.410 }, 00:38:39.410 { 00:38:39.410 "method": "keyring_file_add_key", 00:38:39.410 "params": { 00:38:39.410 "name": "key1", 00:38:39.410 "path": "/tmp/tmp.CwTJ7ZzwOZ" 00:38:39.410 } 00:38:39.410 } 00:38:39.410 ] 00:38:39.410 }, 00:38:39.410 { 00:38:39.410 "subsystem": "iobuf", 00:38:39.410 "config": [ 00:38:39.410 { 00:38:39.410 "method": "iobuf_set_options", 00:38:39.410 "params": { 00:38:39.410 "small_pool_count": 8192, 00:38:39.410 "large_pool_count": 1024, 00:38:39.410 "small_bufsize": 8192, 00:38:39.410 "large_bufsize": 135168, 00:38:39.410 "enable_numa": false 00:38:39.410 } 00:38:39.410 } 00:38:39.410 ] 00:38:39.410 }, 00:38:39.410 { 00:38:39.410 "subsystem": "sock", 00:38:39.410 "config": [ 00:38:39.410 { 00:38:39.410 "method": "sock_set_default_impl", 00:38:39.410 "params": { 00:38:39.410 "impl_name": "posix" 00:38:39.410 } 00:38:39.410 }, 00:38:39.410 { 00:38:39.410 "method": "sock_impl_set_options", 00:38:39.410 "params": { 00:38:39.410 "impl_name": "ssl", 00:38:39.410 "recv_buf_size": 4096, 00:38:39.410 "send_buf_size": 4096, 00:38:39.410 "enable_recv_pipe": true, 00:38:39.410 "enable_quickack": false, 00:38:39.410 "enable_placement_id": 0, 00:38:39.410 "enable_zerocopy_send_server": true, 00:38:39.410 "enable_zerocopy_send_client": false, 00:38:39.410 "zerocopy_threshold": 0, 00:38:39.410 "tls_version": 0, 00:38:39.410 "enable_ktls": false 00:38:39.410 } 00:38:39.410 }, 00:38:39.410 { 00:38:39.410 "method": "sock_impl_set_options", 00:38:39.410 "params": { 00:38:39.410 "impl_name": "posix", 00:38:39.410 "recv_buf_size": 2097152, 00:38:39.410 "send_buf_size": 2097152, 00:38:39.410 "enable_recv_pipe": true, 00:38:39.410 "enable_quickack": false, 00:38:39.410 "enable_placement_id": 0, 00:38:39.410 "enable_zerocopy_send_server": true, 
00:38:39.410 "enable_zerocopy_send_client": false, 00:38:39.410 "zerocopy_threshold": 0, 00:38:39.410 "tls_version": 0, 00:38:39.410 "enable_ktls": false 00:38:39.410 } 00:38:39.410 } 00:38:39.410 ] 00:38:39.410 }, 00:38:39.410 { 00:38:39.410 "subsystem": "vmd", 00:38:39.410 "config": [] 00:38:39.410 }, 00:38:39.410 { 00:38:39.410 "subsystem": "accel", 00:38:39.410 "config": [ 00:38:39.410 { 00:38:39.410 "method": "accel_set_options", 00:38:39.410 "params": { 00:38:39.411 "small_cache_size": 128, 00:38:39.411 "large_cache_size": 16, 00:38:39.411 "task_count": 2048, 00:38:39.411 "sequence_count": 2048, 00:38:39.411 "buf_count": 2048 00:38:39.411 } 00:38:39.411 } 00:38:39.411 ] 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "subsystem": "bdev", 00:38:39.411 "config": [ 00:38:39.411 { 00:38:39.411 "method": "bdev_set_options", 00:38:39.411 "params": { 00:38:39.411 "bdev_io_pool_size": 65535, 00:38:39.411 "bdev_io_cache_size": 256, 00:38:39.411 "bdev_auto_examine": true, 00:38:39.411 "iobuf_small_cache_size": 128, 00:38:39.411 "iobuf_large_cache_size": 16 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "bdev_raid_set_options", 00:38:39.411 "params": { 00:38:39.411 "process_window_size_kb": 1024, 00:38:39.411 "process_max_bandwidth_mb_sec": 0 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "bdev_iscsi_set_options", 00:38:39.411 "params": { 00:38:39.411 "timeout_sec": 30 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "bdev_nvme_set_options", 00:38:39.411 "params": { 00:38:39.411 "action_on_timeout": "none", 00:38:39.411 "timeout_us": 0, 00:38:39.411 "timeout_admin_us": 0, 00:38:39.411 "keep_alive_timeout_ms": 10000, 00:38:39.411 "arbitration_burst": 0, 00:38:39.411 "low_priority_weight": 0, 00:38:39.411 "medium_priority_weight": 0, 00:38:39.411 "high_priority_weight": 0, 00:38:39.411 "nvme_adminq_poll_period_us": 10000, 00:38:39.411 "nvme_ioq_poll_period_us": 0, 00:38:39.411 "io_queue_requests": 512, 
00:38:39.411 "delay_cmd_submit": true, 00:38:39.411 "transport_retry_count": 4, 00:38:39.411 "bdev_retry_count": 3, 00:38:39.411 "transport_ack_timeout": 0, 00:38:39.411 "ctrlr_loss_timeout_sec": 0, 00:38:39.411 "reconnect_delay_sec": 0, 00:38:39.411 "fast_io_fail_timeout_sec": 0, 00:38:39.411 "disable_auto_failback": false, 00:38:39.411 "generate_uuids": false, 00:38:39.411 "transport_tos": 0, 00:38:39.411 "nvme_error_stat": false, 00:38:39.411 "rdma_srq_size": 0, 00:38:39.411 "io_path_stat": false, 00:38:39.411 "allow_accel_sequence": false, 00:38:39.411 "rdma_max_cq_size": 0, 00:38:39.411 "rdma_cm_event_timeout_ms": 0, 00:38:39.411 "dhchap_digests": [ 00:38:39.411 "sha256", 00:38:39.411 "sha384", 00:38:39.411 "sha512" 00:38:39.411 ], 00:38:39.411 "dhchap_dhgroups": [ 00:38:39.411 "null", 00:38:39.411 "ffdhe2048", 00:38:39.411 "ffdhe3072", 00:38:39.411 "ffdhe4096", 00:38:39.411 "ffdhe6144", 00:38:39.411 "ffdhe8192" 00:38:39.411 ] 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "bdev_nvme_attach_controller", 00:38:39.411 "params": { 00:38:39.411 "name": "nvme0", 00:38:39.411 "trtype": "TCP", 00:38:39.411 "adrfam": "IPv4", 00:38:39.411 "traddr": "127.0.0.1", 00:38:39.411 "trsvcid": "4420", 00:38:39.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:39.411 "prchk_reftag": false, 00:38:39.411 "prchk_guard": false, 00:38:39.411 "ctrlr_loss_timeout_sec": 0, 00:38:39.411 "reconnect_delay_sec": 0, 00:38:39.411 "fast_io_fail_timeout_sec": 0, 00:38:39.411 "psk": "key0", 00:38:39.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:39.411 "hdgst": false, 00:38:39.411 "ddgst": false, 00:38:39.411 "multipath": "multipath" 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "bdev_nvme_set_hotplug", 00:38:39.411 "params": { 00:38:39.411 "period_us": 100000, 00:38:39.411 "enable": false 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "bdev_wait_for_examine" 00:38:39.411 } 00:38:39.411 ] 00:38:39.411 }, 00:38:39.411 { 
00:38:39.411 "subsystem": "nbd", 00:38:39.411 "config": [] 00:38:39.411 } 00:38:39.411 ] 00:38:39.411 }' 00:38:39.411 12:52:44 keyring_file -- keyring/file.sh@115 -- # killprocess 1237177 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1237177 ']' 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1237177 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1237177 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237177' 00:38:39.411 killing process with pid 1237177 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@973 -- # kill 1237177 00:38:39.411 Received shutdown signal, test time was about 1.000000 seconds 00:38:39.411 00:38:39.411 Latency(us) 00:38:39.411 [2024-11-20T11:52:45.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.411 [2024-11-20T11:52:45.175Z] =================================================================================================================== 00:38:39.411 [2024-11-20T11:52:45.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:39.411 12:52:44 keyring_file -- common/autotest_common.sh@978 -- # wait 1237177 00:38:39.411 12:52:45 keyring_file -- keyring/file.sh@118 -- # bperfpid=1238885 00:38:39.411 12:52:45 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1238885 /var/tmp/bperf.sock 00:38:39.411 12:52:45 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1238885 ']' 00:38:39.411 12:52:45 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:39.411 12:52:45 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:39.411 12:52:45 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:39.411 12:52:45 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:39.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:39.411 12:52:45 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:39.411 12:52:45 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:39.411 "subsystems": [ 00:38:39.411 { 00:38:39.411 "subsystem": "keyring", 00:38:39.411 "config": [ 00:38:39.411 { 00:38:39.411 "method": "keyring_file_add_key", 00:38:39.411 "params": { 00:38:39.411 "name": "key0", 00:38:39.411 "path": "/tmp/tmp.VIWVFXHN1e" 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "keyring_file_add_key", 00:38:39.411 "params": { 00:38:39.411 "name": "key1", 00:38:39.411 "path": "/tmp/tmp.CwTJ7ZzwOZ" 00:38:39.411 } 00:38:39.411 } 00:38:39.411 ] 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "subsystem": "iobuf", 00:38:39.411 "config": [ 00:38:39.411 { 00:38:39.411 "method": "iobuf_set_options", 00:38:39.411 "params": { 00:38:39.411 "small_pool_count": 8192, 00:38:39.411 "large_pool_count": 1024, 00:38:39.411 "small_bufsize": 8192, 00:38:39.411 "large_bufsize": 135168, 00:38:39.411 "enable_numa": false 00:38:39.411 } 00:38:39.411 } 00:38:39.411 ] 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "subsystem": "sock", 00:38:39.411 "config": [ 00:38:39.411 { 00:38:39.411 "method": "sock_set_default_impl", 00:38:39.411 "params": { 00:38:39.411 "impl_name": "posix" 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "sock_impl_set_options", 00:38:39.411 
"params": { 00:38:39.411 "impl_name": "ssl", 00:38:39.411 "recv_buf_size": 4096, 00:38:39.411 "send_buf_size": 4096, 00:38:39.411 "enable_recv_pipe": true, 00:38:39.411 "enable_quickack": false, 00:38:39.411 "enable_placement_id": 0, 00:38:39.411 "enable_zerocopy_send_server": true, 00:38:39.411 "enable_zerocopy_send_client": false, 00:38:39.411 "zerocopy_threshold": 0, 00:38:39.411 "tls_version": 0, 00:38:39.411 "enable_ktls": false 00:38:39.411 } 00:38:39.411 }, 00:38:39.411 { 00:38:39.411 "method": "sock_impl_set_options", 00:38:39.411 "params": { 00:38:39.411 "impl_name": "posix", 00:38:39.411 "recv_buf_size": 2097152, 00:38:39.411 "send_buf_size": 2097152, 00:38:39.411 "enable_recv_pipe": true, 00:38:39.412 "enable_quickack": false, 00:38:39.412 "enable_placement_id": 0, 00:38:39.412 "enable_zerocopy_send_server": true, 00:38:39.412 "enable_zerocopy_send_client": false, 00:38:39.412 "zerocopy_threshold": 0, 00:38:39.412 "tls_version": 0, 00:38:39.412 "enable_ktls": false 00:38:39.412 } 00:38:39.412 } 00:38:39.412 ] 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "subsystem": "vmd", 00:38:39.412 "config": [] 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "subsystem": "accel", 00:38:39.412 "config": [ 00:38:39.412 { 00:38:39.412 "method": "accel_set_options", 00:38:39.412 "params": { 00:38:39.412 "small_cache_size": 128, 00:38:39.412 "large_cache_size": 16, 00:38:39.412 "task_count": 2048, 00:38:39.412 "sequence_count": 2048, 00:38:39.412 "buf_count": 2048 00:38:39.412 } 00:38:39.412 } 00:38:39.412 ] 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "subsystem": "bdev", 00:38:39.412 "config": [ 00:38:39.412 { 00:38:39.412 "method": "bdev_set_options", 00:38:39.412 "params": { 00:38:39.412 "bdev_io_pool_size": 65535, 00:38:39.412 "bdev_io_cache_size": 256, 00:38:39.412 "bdev_auto_examine": true, 00:38:39.412 "iobuf_small_cache_size": 128, 00:38:39.412 "iobuf_large_cache_size": 16 00:38:39.412 } 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "method": "bdev_raid_set_options", 
00:38:39.412 "params": { 00:38:39.412 "process_window_size_kb": 1024, 00:38:39.412 "process_max_bandwidth_mb_sec": 0 00:38:39.412 } 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "method": "bdev_iscsi_set_options", 00:38:39.412 "params": { 00:38:39.412 "timeout_sec": 30 00:38:39.412 } 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "method": "bdev_nvme_set_options", 00:38:39.412 "params": { 00:38:39.412 "action_on_timeout": "none", 00:38:39.412 "timeout_us": 0, 00:38:39.412 "timeout_admin_us": 0, 00:38:39.412 "keep_alive_timeout_ms": 10000, 00:38:39.412 "arbitration_burst": 0, 00:38:39.412 "low_priority_weight": 0, 00:38:39.412 "medium_priority_weight": 0, 00:38:39.412 "high_priority_weight": 0, 00:38:39.412 "nvme_adminq_poll_period_us": 10000, 00:38:39.412 "nvme_ioq_poll_period_us": 0, 00:38:39.412 "io_queue_requests": 512, 00:38:39.412 "delay_cmd_submit": true, 00:38:39.412 "transport_retry_count": 4, 00:38:39.412 "bdev_retry_count": 3, 00:38:39.412 "transport_ack_timeout": 0, 00:38:39.412 "ctrlr_loss_timeout_sec": 0, 00:38:39.412 "reconnect_delay_sec": 0, 00:38:39.412 "fast_io_fail_timeout_sec": 0, 00:38:39.412 "disable_auto_failback": false, 00:38:39.412 "generate_uuids": false, 00:38:39.412 "transport_tos": 0, 00:38:39.412 "nvme_error_stat": false, 00:38:39.412 "rdma_srq_size": 0, 00:38:39.412 "io_path_stat": false, 00:38:39.412 "allow_accel_sequence": false, 00:38:39.412 "rdma_max_cq_size": 0, 00:38:39.412 "rdma_cm_event_timeout_ms": 0, 00:38:39.412 "dhchap_digests": [ 00:38:39.412 "sha256", 00:38:39.412 "sha384", 00:38:39.412 "sha512" 00:38:39.412 ], 00:38:39.412 "dhchap_dhgroups": [ 00:38:39.412 "null", 00:38:39.412 "ffdhe2048", 00:38:39.412 "ffdhe3072", 00:38:39.412 "ffdhe4096", 00:38:39.412 "ffdhe6144", 00:38:39.412 "ffdhe8192" 00:38:39.412 ] 00:38:39.412 } 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "method": "bdev_nvme_attach_controller", 00:38:39.412 "params": { 00:38:39.412 "name": "nvme0", 00:38:39.412 "trtype": "TCP", 00:38:39.412 "adrfam": "IPv4", 
00:38:39.412 "traddr": "127.0.0.1", 00:38:39.412 "trsvcid": "4420", 00:38:39.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:39.412 "prchk_reftag": false, 00:38:39.412 "prchk_guard": false, 00:38:39.412 "ctrlr_loss_timeout_sec": 0, 00:38:39.412 "reconnect_delay_sec": 0, 00:38:39.412 "fast_io_fail_timeout_sec": 0, 00:38:39.412 "psk": "key0", 00:38:39.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:39.412 "hdgst": false, 00:38:39.412 "ddgst": false, 00:38:39.412 "multipath": "multipath" 00:38:39.412 } 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "method": "bdev_nvme_set_hotplug", 00:38:39.412 "params": { 00:38:39.412 "period_us": 100000, 00:38:39.412 "enable": false 00:38:39.412 } 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "method": "bdev_wait_for_examine" 00:38:39.412 } 00:38:39.412 ] 00:38:39.412 }, 00:38:39.412 { 00:38:39.412 "subsystem": "nbd", 00:38:39.412 "config": [] 00:38:39.412 } 00:38:39.412 ] 00:38:39.412 }' 00:38:39.412 12:52:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:39.671 [2024-11-20 12:52:45.193759] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:38:39.671 [2024-11-20 12:52:45.193808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238885 ] 00:38:39.671 [2024-11-20 12:52:45.266630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.671 [2024-11-20 12:52:45.301437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:39.930 [2024-11-20 12:52:45.459324] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:40.497 12:52:46 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:40.497 12:52:46 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:40.497 12:52:46 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:40.497 12:52:46 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:40.497 12:52:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.497 12:52:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:40.497 12:52:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:40.497 12:52:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.497 12:52:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.497 12:52:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.497 12:52:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.497 12:52:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.756 12:52:46 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:40.756 12:52:46 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:40.756 12:52:46 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:40.756 12:52:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.756 12:52:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.756 12:52:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:40.756 12:52:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.015 12:52:46 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:41.015 12:52:46 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:41.015 12:52:46 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:41.015 12:52:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:41.015 12:52:46 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:41.015 12:52:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:41.015 12:52:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.VIWVFXHN1e /tmp/tmp.CwTJ7ZzwOZ 00:38:41.015 12:52:46 keyring_file -- keyring/file.sh@20 -- # killprocess 1238885 00:38:41.015 12:52:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1238885 ']' 00:38:41.015 12:52:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1238885 00:38:41.015 12:52:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:41.015 12:52:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:41.015 12:52:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1238885 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1238885' 00:38:41.274 killing process with pid 1238885 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@973 -- # kill 1238885 00:38:41.274 Received shutdown signal, test time was about 1.000000 seconds 00:38:41.274 00:38:41.274 Latency(us) 00:38:41.274 [2024-11-20T11:52:47.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.274 [2024-11-20T11:52:47.038Z] =================================================================================================================== 00:38:41.274 [2024-11-20T11:52:47.038Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@978 -- # wait 1238885 00:38:41.274 12:52:46 keyring_file -- keyring/file.sh@21 -- # killprocess 1237120 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1237120 ']' 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1237120 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1237120 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237120' 00:38:41.274 killing process with pid 1237120 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@973 -- # kill 1237120 00:38:41.274 12:52:46 keyring_file -- common/autotest_common.sh@978 -- # wait 1237120 00:38:41.532 00:38:41.532 real 0m11.266s 00:38:41.532 user 0m27.819s 00:38:41.532 sys 0m2.617s 00:38:41.532 12:52:47 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:38:41.532 12:52:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.532 ************************************ 00:38:41.532 END TEST keyring_file 00:38:41.532 ************************************ 00:38:41.791 12:52:47 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:41.791 12:52:47 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:41.791 12:52:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:41.791 12:52:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.791 12:52:47 -- common/autotest_common.sh@10 -- # set +x 00:38:41.791 ************************************ 00:38:41.791 START TEST keyring_linux 00:38:41.791 ************************************ 00:38:41.791 12:52:47 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:41.791 Joined session keyring: 171159529 00:38:41.791 * Looking for test storage... 
00:38:41.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:41.791 12:52:47 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:41.791 12:52:47 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:41.791 12:52:47 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:41.791 12:52:47 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:41.791 12:52:47 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:41.792 12:52:47 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.792 12:52:47 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.792 --rc genhtml_branch_coverage=1 00:38:41.792 --rc genhtml_function_coverage=1 00:38:41.792 --rc genhtml_legend=1 00:38:41.792 --rc geninfo_all_blocks=1 00:38:41.792 --rc geninfo_unexecuted_blocks=1 00:38:41.792 00:38:41.792 ' 00:38:41.792 12:52:47 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.792 --rc genhtml_branch_coverage=1 00:38:41.792 --rc genhtml_function_coverage=1 00:38:41.792 --rc genhtml_legend=1 00:38:41.792 --rc geninfo_all_blocks=1 00:38:41.792 --rc geninfo_unexecuted_blocks=1 00:38:41.792 00:38:41.792 ' 
00:38:41.792 12:52:47 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.792 --rc genhtml_branch_coverage=1 00:38:41.792 --rc genhtml_function_coverage=1 00:38:41.792 --rc genhtml_legend=1 00:38:41.792 --rc geninfo_all_blocks=1 00:38:41.792 --rc geninfo_unexecuted_blocks=1 00:38:41.792 00:38:41.792 ' 00:38:41.792 12:52:47 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.792 --rc genhtml_branch_coverage=1 00:38:41.792 --rc genhtml_function_coverage=1 00:38:41.792 --rc genhtml_legend=1 00:38:41.792 --rc geninfo_all_blocks=1 00:38:41.792 --rc geninfo_unexecuted_blocks=1 00:38:41.792 00:38:41.792 ' 00:38:41.792 12:52:47 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:41.792 12:52:47 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:005363bc-ad7e-eb11-906e-0017a4403562 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=005363bc-ad7e-eb11-906e-0017a4403562 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.792 12:52:47 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.792 12:52:47 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:42.051 12:52:47 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:42.051 12:52:47 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:42.051 12:52:47 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:42.051 12:52:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.051 12:52:47 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.051 12:52:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.051 12:52:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:42.051 12:52:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:42.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:42.051 /tmp/:spdk-test:key0 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:42.051 12:52:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:42.051 12:52:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:42.051 /tmp/:spdk-test:key1 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1239265 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:42.051 12:52:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1239265 00:38:42.051 12:52:47 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1239265 ']' 00:38:42.051 12:52:47 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:42.051 12:52:47 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:42.051 12:52:47 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:42.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:42.051 12:52:47 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:42.051 12:52:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:42.051 [2024-11-20 12:52:47.696116] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:38:42.051 [2024-11-20 12:52:47.696167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239265 ] 00:38:42.051 [2024-11-20 12:52:47.766578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.051 [2024-11-20 12:52:47.805827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.310 12:52:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:42.310 12:52:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:42.310 12:52:48 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:42.310 12:52:48 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.310 12:52:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:42.310 [2024-11-20 12:52:48.015252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:42.310 null0 00:38:42.310 [2024-11-20 12:52:48.047309] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:42.310 [2024-11-20 12:52:48.047763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:42.310 12:52:48 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.310 12:52:48 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:42.568 505856418 00:38:42.568 12:52:48 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:42.568 991611496 00:38:42.568 12:52:48 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1239413 00:38:42.568 12:52:48 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1239413 /var/tmp/bperf.sock 00:38:42.568 12:52:48 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1239413 ']' 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:42.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:42.568 [2024-11-20 12:52:48.119047] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:38:42.568 [2024-11-20 12:52:48.119088] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239413 ] 00:38:42.568 [2024-11-20 12:52:48.192856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.568 [2024-11-20 12:52:48.232099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:42.568 12:52:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:42.568 12:52:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:42.568 12:52:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:42.827 12:52:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:42.827 12:52:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:43.084 12:52:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:43.084 12:52:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:43.084 [2024-11-20 12:52:48.825783] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:43.343 nvme0n1 00:38:43.343 12:52:48 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:43.343 12:52:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:43.343 12:52:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:43.343 12:52:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:43.343 12:52:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.343 12:52:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:43.343 12:52:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:43.343 12:52:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:43.343 12:52:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:43.343 12:52:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:43.343 12:52:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.343 12:52:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:43.343 12:52:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.602 12:52:49 keyring_linux -- keyring/linux.sh@25 -- # sn=505856418 00:38:43.602 12:52:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:43.602 12:52:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:43.602 12:52:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 505856418 == \5\0\5\8\5\6\4\1\8 ]] 00:38:43.602 12:52:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 505856418 00:38:43.603 12:52:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:43.603 12:52:49 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:43.603 Running I/O for 1 seconds... 00:38:44.986 23196.00 IOPS, 90.61 MiB/s 00:38:44.986 Latency(us) 00:38:44.986 [2024-11-20T11:52:50.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.986 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:44.986 nvme0n1 : 1.01 23195.59 90.61 0.00 0.00 5501.70 3872.58 8757.99 00:38:44.986 [2024-11-20T11:52:50.750Z] =================================================================================================================== 00:38:44.986 [2024-11-20T11:52:50.750Z] Total : 23195.59 90.61 0.00 0.00 5501.70 3872.58 8757.99 00:38:44.986 { 00:38:44.986 "results": [ 00:38:44.986 { 00:38:44.986 "job": "nvme0n1", 00:38:44.986 "core_mask": "0x2", 00:38:44.986 "workload": "randread", 00:38:44.986 "status": "finished", 00:38:44.986 "queue_depth": 128, 00:38:44.986 "io_size": 4096, 00:38:44.986 "runtime": 1.005579, 00:38:44.986 "iops": 23195.591793384705, 00:38:44.986 "mibps": 90.607780442909, 00:38:44.986 "io_failed": 0, 00:38:44.986 "io_timeout": 0, 00:38:44.986 "avg_latency_us": 5501.701542550911, 00:38:44.986 "min_latency_us": 3872.581818181818, 00:38:44.986 "max_latency_us": 8757.992727272727 00:38:44.986 } 00:38:44.986 ], 00:38:44.986 "core_count": 1 00:38:44.986 } 00:38:44.986 12:52:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:44.986 12:52:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:44.986 12:52:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:44.986 12:52:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:44.986 12:52:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:44.986 12:52:50 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:44.986 12:52:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.986 12:52:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:45.300 12:52:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:45.300 [2024-11-20 12:52:50.936072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:45.300 [2024-11-20 12:52:50.936847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90eb0 (107): Transport endpoint is not connected 00:38:45.300 [2024-11-20 12:52:50.937843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90eb0 (9): Bad file descriptor 00:38:45.300 [2024-11-20 12:52:50.938845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:45.300 [2024-11-20 12:52:50.938856] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:45.300 [2024-11-20 12:52:50.938862] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:45.300 [2024-11-20 12:52:50.938870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:45.300 request: 00:38:45.300 { 00:38:45.300 "name": "nvme0", 00:38:45.300 "trtype": "tcp", 00:38:45.300 "traddr": "127.0.0.1", 00:38:45.300 "adrfam": "ipv4", 00:38:45.300 "trsvcid": "4420", 00:38:45.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.300 "prchk_reftag": false, 00:38:45.300 "prchk_guard": false, 00:38:45.300 "hdgst": false, 00:38:45.300 "ddgst": false, 00:38:45.300 "psk": ":spdk-test:key1", 00:38:45.300 "allow_unrecognized_csi": false, 00:38:45.300 "method": "bdev_nvme_attach_controller", 00:38:45.300 "req_id": 1 00:38:45.300 } 00:38:45.300 Got JSON-RPC error response 00:38:45.300 response: 00:38:45.300 { 00:38:45.300 "code": -5, 00:38:45.300 "message": "Input/output error" 00:38:45.300 } 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:45.300 12:52:50 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@33 -- # sn=505856418 00:38:45.300 12:52:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 505856418 00:38:45.300 1 links removed 00:38:45.301 12:52:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:45.301 12:52:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:45.301 
12:52:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:45.301 12:52:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:45.301 12:52:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:45.301 12:52:50 keyring_linux -- keyring/linux.sh@33 -- # sn=991611496 00:38:45.301 12:52:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 991611496 00:38:45.301 1 links removed 00:38:45.301 12:52:50 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1239413 00:38:45.301 12:52:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1239413 ']' 00:38:45.301 12:52:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1239413 00:38:45.301 12:52:50 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:45.301 12:52:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:45.301 12:52:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1239413 00:38:45.301 12:52:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:45.301 12:52:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:45.301 12:52:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1239413' 00:38:45.301 killing process with pid 1239413 00:38:45.301 12:52:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 1239413 00:38:45.301 Received shutdown signal, test time was about 1.000000 seconds 00:38:45.301 00:38:45.301 Latency(us) 00:38:45.301 [2024-11-20T11:52:51.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.301 [2024-11-20T11:52:51.065Z] =================================================================================================================== 00:38:45.301 [2024-11-20T11:52:51.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:45.301 12:52:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 1239413 
00:38:45.596 12:52:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1239265 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1239265 ']' 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1239265 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1239265 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1239265' 00:38:45.596 killing process with pid 1239265 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 1239265 00:38:45.596 12:52:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 1239265 00:38:45.883 00:38:45.883 real 0m4.164s 00:38:45.883 user 0m7.737s 00:38:45.883 sys 0m1.415s 00:38:45.883 12:52:51 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.883 12:52:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:45.883 ************************************ 00:38:45.883 END TEST keyring_linux 00:38:45.883 ************************************ 00:38:45.883 12:52:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:45.883 12:52:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:45.883 12:52:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:45.883 12:52:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:45.883 12:52:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:45.883 12:52:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:45.883 12:52:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:45.883 12:52:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:45.883 12:52:51 -- common/autotest_common.sh@10 -- # set +x 00:38:45.883 12:52:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:45.883 12:52:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:45.883 12:52:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:45.883 12:52:51 -- common/autotest_common.sh@10 -- # set +x 00:38:51.157 INFO: APP EXITING 00:38:51.157 INFO: killing all VMs 00:38:51.157 INFO: killing vhost app 00:38:51.157 INFO: EXIT DONE 00:38:55.349 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:38:55.349 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:38:55.349 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:d9:00.0 (8086 0a54): Already using the nvme driver 00:38:55.349 
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:38:55.349 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:38:58.638 Cleaning 00:38:58.638 Removing: /var/run/dpdk/spdk0/config 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:58.638 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:58.638 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:58.638 Removing: /var/run/dpdk/spdk1/config 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:58.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:58.638 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:38:58.638 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:58.638 Removing: /var/run/dpdk/spdk2/config 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:58.638 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:58.638 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:58.638 Removing: /var/run/dpdk/spdk3/config 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:58.638 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:58.638 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:58.638 Removing: /var/run/dpdk/spdk4/config 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:58.638 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:58.638 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:58.638 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:58.638 Removing: /dev/shm/bdev_svc_trace.1 00:38:58.638 Removing: /dev/shm/nvmf_trace.0 00:38:58.638 Removing: /dev/shm/spdk_tgt_trace.pid715470 00:38:58.638 Removing: /var/run/dpdk/spdk0 00:38:58.638 Removing: /var/run/dpdk/spdk1 00:38:58.638 Removing: /var/run/dpdk/spdk2 00:38:58.638 Removing: /var/run/dpdk/spdk3 00:38:58.638 Removing: /var/run/dpdk/spdk4 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1002116 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1006875 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1011153 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1019257 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1019273 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1026096 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1026355 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1026613 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1027137 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1027145 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1032264 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1032903 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1037614 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1040474 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1046157 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1051970 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1061203 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1069558 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1069616 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1090395 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1090954 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1091490 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1092114 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1092868 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1093410 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1093986 00:38:58.638 Removing: 
/var/run/dpdk/spdk_pid1094684 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1099082 00:38:58.638 Removing: /var/run/dpdk/spdk_pid1099352 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1105768 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1106081 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1111806 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1116394 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1127550 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1128090 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1132691 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1132973 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1137560 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1143479 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1146394 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1157129 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1166489 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1168734 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1169780 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1186901 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1190990 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1194039 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1202891 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1202897 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1208420 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1210624 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1212611 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1213930 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1216564 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1217771 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1227631 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1228153 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1228686 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1231569 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1232115 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1232665 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1237120 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1237177 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1238885 
00:38:58.897 Removing: /var/run/dpdk/spdk_pid1239265 00:38:58.897 Removing: /var/run/dpdk/spdk_pid1239413 00:38:58.897 Removing: /var/run/dpdk/spdk_pid710255 00:38:58.897 Removing: /var/run/dpdk/spdk_pid713552 00:38:58.897 Removing: /var/run/dpdk/spdk_pid715470 00:38:58.897 Removing: /var/run/dpdk/spdk_pid716020 00:38:58.898 Removing: /var/run/dpdk/spdk_pid717093 00:38:58.898 Removing: /var/run/dpdk/spdk_pid717186 00:38:58.898 Removing: /var/run/dpdk/spdk_pid718220 00:38:58.898 Removing: /var/run/dpdk/spdk_pid718478 00:38:58.898 Removing: /var/run/dpdk/spdk_pid718864 00:38:58.898 Removing: /var/run/dpdk/spdk_pid723221 00:38:58.898 Removing: /var/run/dpdk/spdk_pid726508 00:38:58.898 Removing: /var/run/dpdk/spdk_pid726832 00:38:58.898 Removing: /var/run/dpdk/spdk_pid727153 00:38:58.898 Removing: /var/run/dpdk/spdk_pid727503 00:38:58.898 Removing: /var/run/dpdk/spdk_pid727830 00:38:58.898 Removing: /var/run/dpdk/spdk_pid728111 00:38:58.898 Removing: /var/run/dpdk/spdk_pid728397 00:38:58.898 Removing: /var/run/dpdk/spdk_pid728712 00:38:58.898 Removing: /var/run/dpdk/spdk_pid729565 00:38:58.898 Removing: /var/run/dpdk/spdk_pid733451 00:38:58.898 Removing: /var/run/dpdk/spdk_pid733668 00:38:58.898 Removing: /var/run/dpdk/spdk_pid733789 00:38:58.898 Removing: /var/run/dpdk/spdk_pid734041 00:38:58.898 Removing: /var/run/dpdk/spdk_pid734551 00:38:58.898 Removing: /var/run/dpdk/spdk_pid734618 00:38:59.157 Removing: /var/run/dpdk/spdk_pid735170 00:38:59.157 Removing: /var/run/dpdk/spdk_pid735177 00:38:59.157 Removing: /var/run/dpdk/spdk_pid735476 00:38:59.157 Removing: /var/run/dpdk/spdk_pid735736 00:38:59.157 Removing: /var/run/dpdk/spdk_pid736023 00:38:59.157 Removing: /var/run/dpdk/spdk_pid736047 00:38:59.157 Removing: /var/run/dpdk/spdk_pid736668 00:38:59.157 Removing: /var/run/dpdk/spdk_pid736950 00:38:59.157 Removing: /var/run/dpdk/spdk_pid737281 00:38:59.157 Removing: /var/run/dpdk/spdk_pid741373 00:38:59.157 Removing: /var/run/dpdk/spdk_pid746106 00:38:59.157 
Removing: /var/run/dpdk/spdk_pid757195 00:38:59.157 Removing: /var/run/dpdk/spdk_pid757808 00:38:59.157 Removing: /var/run/dpdk/spdk_pid762434 00:38:59.157 Removing: /var/run/dpdk/spdk_pid762728 00:38:59.157 Removing: /var/run/dpdk/spdk_pid767387 00:38:59.157 Removing: /var/run/dpdk/spdk_pid773789 00:38:59.157 Removing: /var/run/dpdk/spdk_pid776761 00:38:59.157 Removing: /var/run/dpdk/spdk_pid788544 00:38:59.157 Removing: /var/run/dpdk/spdk_pid798090 00:38:59.157 Removing: /var/run/dpdk/spdk_pid800132 00:38:59.157 Removing: /var/run/dpdk/spdk_pid801072 00:38:59.157 Removing: /var/run/dpdk/spdk_pid819442 00:38:59.157 Removing: /var/run/dpdk/spdk_pid823824 00:38:59.157 Removing: /var/run/dpdk/spdk_pid873543 00:38:59.157 Removing: /var/run/dpdk/spdk_pid879198 00:38:59.157 Removing: /var/run/dpdk/spdk_pid885260 00:38:59.157 Removing: /var/run/dpdk/spdk_pid892890 00:38:59.157 Removing: /var/run/dpdk/spdk_pid892894 00:38:59.157 Removing: /var/run/dpdk/spdk_pid893706 00:38:59.157 Removing: /var/run/dpdk/spdk_pid894733 00:38:59.157 Removing: /var/run/dpdk/spdk_pid895662 00:38:59.157 Removing: /var/run/dpdk/spdk_pid896306 00:38:59.157 Removing: /var/run/dpdk/spdk_pid896311 00:38:59.157 Removing: /var/run/dpdk/spdk_pid896577 00:38:59.157 Removing: /var/run/dpdk/spdk_pid896600 00:38:59.157 Removing: /var/run/dpdk/spdk_pid896734 00:38:59.157 Removing: /var/run/dpdk/spdk_pid897637 00:38:59.157 Removing: /var/run/dpdk/spdk_pid898676 00:38:59.157 Removing: /var/run/dpdk/spdk_pid899475 00:38:59.157 Removing: /var/run/dpdk/spdk_pid900242 00:38:59.157 Removing: /var/run/dpdk/spdk_pid900256 00:38:59.157 Removing: /var/run/dpdk/spdk_pid900522 00:38:59.157 Removing: /var/run/dpdk/spdk_pid901828 00:38:59.157 Removing: /var/run/dpdk/spdk_pid902789 00:38:59.157 Removing: /var/run/dpdk/spdk_pid911774 00:38:59.157 Removing: /var/run/dpdk/spdk_pid941396 00:38:59.157 Removing: /var/run/dpdk/spdk_pid946295 00:38:59.157 Removing: /var/run/dpdk/spdk_pid948116 00:38:59.157 Removing: 
/var/run/dpdk/spdk_pid949963 00:38:59.157 Removing: /var/run/dpdk/spdk_pid950228 00:38:59.157 Removing: /var/run/dpdk/spdk_pid950258 00:38:59.157 Removing: /var/run/dpdk/spdk_pid950515 00:38:59.157 Removing: /var/run/dpdk/spdk_pid951088 00:38:59.157 Removing: /var/run/dpdk/spdk_pid953186 00:38:59.157 Removing: /var/run/dpdk/spdk_pid954043 00:38:59.157 Removing: /var/run/dpdk/spdk_pid954560 00:38:59.157 Removing: /var/run/dpdk/spdk_pid956747 00:38:59.157 Removing: /var/run/dpdk/spdk_pid957341 00:38:59.157 Removing: /var/run/dpdk/spdk_pid958124 00:38:59.157 Removing: /var/run/dpdk/spdk_pid962476 00:38:59.157 Removing: /var/run/dpdk/spdk_pid968840 00:38:59.157 Removing: /var/run/dpdk/spdk_pid968841 00:38:59.157 Removing: /var/run/dpdk/spdk_pid968843 00:38:59.157 Removing: /var/run/dpdk/spdk_pid972824 00:38:59.417 Removing: /var/run/dpdk/spdk_pid981725 00:38:59.417 Removing: /var/run/dpdk/spdk_pid986056 00:38:59.417 Removing: /var/run/dpdk/spdk_pid992644 00:38:59.417 Removing: /var/run/dpdk/spdk_pid994112 00:38:59.417 Removing: /var/run/dpdk/spdk_pid995605 00:38:59.417 Removing: /var/run/dpdk/spdk_pid997147 00:38:59.417 Clean 00:38:59.417 12:53:05 -- common/autotest_common.sh@1453 -- # return 0 00:38:59.417 12:53:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:59.417 12:53:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.417 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:38:59.417 12:53:05 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:38:59.417 12:53:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.417 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:38:59.417 12:53:05 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:59.417 12:53:05 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:59.417 12:53:05 -- spdk/autotest.sh@394 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:59.417 12:53:05 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:59.417 12:53:05 -- spdk/autotest.sh@398 -- # hostname 00:38:59.417 12:53:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-15 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:59.676 geninfo: WARNING: invalid characters removed from testname! 00:39:21.613 12:53:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:21.613 12:53:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:22.990 12:53:28 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:24.895 12:53:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:26.274 12:53:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:28.181 12:53:33 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:30.088 12:53:35 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:30.088 12:53:35 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:30.088 12:53:35 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:30.088 12:53:35 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:30.088 12:53:35 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:30.088 12:53:35 -- common/autotest_common.sh@744 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:30.088 + [[ -n 629706 ]] 00:39:30.088 + sudo kill 629706 00:39:30.098 [Pipeline] } 00:39:30.113 [Pipeline] // stage 00:39:30.120 [Pipeline] } 00:39:30.134 [Pipeline] // timeout 00:39:30.139 [Pipeline] } 00:39:30.154 [Pipeline] // catchError 00:39:30.159 [Pipeline] } 00:39:30.175 [Pipeline] // wrap 00:39:30.181 [Pipeline] } 00:39:30.195 [Pipeline] // catchError 00:39:30.205 [Pipeline] stage 00:39:30.207 [Pipeline] { (Epilogue) 00:39:30.221 [Pipeline] catchError 00:39:30.222 [Pipeline] { 00:39:30.236 [Pipeline] echo 00:39:30.237 Cleanup processes 00:39:30.244 [Pipeline] sh 00:39:30.530 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:30.530 1251112 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:30.545 [Pipeline] sh 00:39:30.830 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:30.830 ++ grep -v 'sudo pgrep' 00:39:30.830 ++ awk '{print $1}' 00:39:30.830 + sudo kill -9 00:39:30.830 + true 00:39:30.844 [Pipeline] sh 00:39:31.129 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:41.121 [Pipeline] sh 00:39:41.410 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:41.410 Artifacts sizes are good 00:39:41.425 [Pipeline] archiveArtifacts 00:39:41.433 Archiving artifacts 00:39:41.610 [Pipeline] sh 00:39:41.957 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:41.972 [Pipeline] cleanWs 00:39:41.983 [WS-CLEANUP] Deleting project workspace... 00:39:41.983 [WS-CLEANUP] Deferred wipeout is used... 
00:39:41.990 [WS-CLEANUP] done 00:39:41.992 [Pipeline] } 00:39:42.010 [Pipeline] // catchError 00:39:42.023 [Pipeline] sh 00:39:42.306 + logger -p user.info -t JENKINS-CI 00:39:42.314 [Pipeline] } 00:39:42.329 [Pipeline] // stage 00:39:42.335 [Pipeline] } 00:39:42.349 [Pipeline] // node 00:39:42.355 [Pipeline] End of Pipeline 00:39:42.383 Finished: SUCCESS